Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Trust, Design, and the Reality of Security Engineering in an AI-Driven World - Patricia R

From: NDC
Duration: 11:37
Views: 4

This talk was recorded at NDC Security in Oslo, Norway.

Attend the next NDC conference near you:
https://ndcconferences.com
https://ndc-security.com/

Subscribe to our YouTube channel and learn every day: @NDC

Follow our Social Media!

https://www.facebook.com/ndcconferences
https://twitter.com/NDC_Conferences
https://www.instagram.com/ndc_conferences


Solution designs are created in the world of possibility, built on high-level promises and assumptions about how components will behave. Security engineers work in reality, uncovering edge cases, handling failures, and raising tickets or feature requests when vendor systems don’t behave as expected. AI creates a snowball effect: Polished marketing material sounds even more convincing, gains stakeholder buy-in, and pushes unverified assumptions forward, widening the gap between ambitious designs and operational reality. The result is a subtle but real load, as engineers try to bridge that gap without burning out or being labelled blockers. This talk shares a bit of rant, real-world stories, and practical suggestions to help you stay sane.

Read the whole story
alvinashcraft
4 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

514: Running Local LLMs in VS Code


In this episode James and Frank dive into running AI coding models locally versus in the cloud—BYOK/OpenRouter, VS Code’s chat/agent harness, model runners (Ollama, vLLM), and the practicality of 27B models on a 3090 using 4‑bit quantization. They share hands-on takeaways—how recent engineering (MT/MTPLX) boosts inference to usable token rates, when auto model selection makes sense, cost and hardware trade‑offs, and why local models can liberate your workflow while still needing smarter, unified tooling.

Follow Us

⭐⭐ Review Us ⭐⭐

Machine transcription available on http://mergeconflict.fm

Support Merge Conflict





Download audio: https://aphid.fireside.fm/d/1437767933/02d84890-e58d-43eb-ab4c-26bcc8524289/4aae4fef-6412-4966-bac3-a02bd4d9b0c0.mp3

Using configurable token lifetimes in Microsoft Entra ID, .NET and Microsoft Graph


Configurable token lifetimes in the Microsoft identity platform went GA, and I thought I would look at implementing this in a .NET console application using Microsoft Graph. This article looks at implementing it with a delegated user credential as well as an application client credential.

Code: https://github.com/damienbod/EntraIdTokenLifeTimePolicies

The code example was initially created using Copilot and the Microsoft documentation. The generated code had a number of issues which were fixed and cleaned up, but it is good enough for a demo. The security still needs to be improved before using it in a production environment.

The aim of the code is to set the token lifespan using the new Entra ID feature. Reducing the lifespan of a token can help reduce the security risk in some use cases. This would be useful when using application access tokens for Entra ID setup tasks or other administration flows.
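For reference, the serialized policy definition follows the format shown in the Microsoft configurable token lifetimes documentation; the two-hour value here is just an illustration:

```json
{
  "TokenLifetimePolicy": {
    "Version": 1,
    "AccessTokenLifetime": "02:00:00"
  }
}
```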

The default service is a .NET implementation created from the PowerShell examples and GitHub Copilot.

using System.Text.Json;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using Microsoft.Graph;
using Microsoft.Graph.Models;

namespace EntraIdTokenLifeTimePolicies.Core;

public sealed class TokenLifetimePolicyService(GraphServiceClient graphServiceClient,
    IOptions<TokenLifetimePolicyOptions> options, ILogger<TokenLifetimePolicyService> logger) 
{
    private readonly GraphServiceClient _graphServiceClient = graphServiceClient;
    private readonly TokenLifetimePolicyOptions _options = options.Value;
    private readonly ILogger<TokenLifetimePolicyService> _logger = logger;

    public async Task ApplyPolicyAsync(CancellationToken cancellationToken = default)
    {
        ValidateOptions();

        var servicePrincipal = await FindServicePrincipalAsync(_options.TargetApplicationClientId, cancellationToken);
        if (servicePrincipal?.Id is null)
        {
            throw new InvalidOperationException(
                $"No service principal was found for application client ID '{_options.TargetApplicationClientId}'.");
        }

        var policyDefinition = BuildPolicyDefinition(_options.AccessTokenLifetimeMinutes);
        var policy = await UpsertPolicyAsync(policyDefinition, cancellationToken);

        if (policy.Id is null)
        {
            throw new InvalidOperationException("The created or updated token lifetime policy does not contain an ID.");
        }

        await AssignPolicyToServicePrincipalAsync(servicePrincipal.Id, policy.Id, cancellationToken);
    }

    private async Task<ServicePrincipal?> FindServicePrincipalAsync(string appId, CancellationToken cancellationToken)
    {
        var response = await _graphServiceClient.ServicePrincipals.GetAsync(requestConfiguration =>
        {
            requestConfiguration.QueryParameters.Filter = $"appId eq '{EscapeFilterValue(appId)}'";
            requestConfiguration.QueryParameters.Top = 1;
            requestConfiguration.QueryParameters.Select = ["id", "appId", "displayName"];
        }, cancellationToken);

        var servicePrincipal = response?.Value?.FirstOrDefault();
        _logger.LogInformation("Resolved target service principal: {DisplayName} ({ServicePrincipalId})", servicePrincipal?.DisplayName, servicePrincipal?.Id);
        return servicePrincipal;
    }

    private async Task<TokenLifetimePolicy> UpsertPolicyAsync(string definition, CancellationToken cancellationToken)
    {
        var existingPolicies = await _graphServiceClient.Policies.TokenLifetimePolicies.GetAsync(requestConfiguration =>
        {
            requestConfiguration.QueryParameters.Filter = $"displayName eq '{EscapeFilterValue(_options.PolicyDisplayName)}'";
            requestConfiguration.QueryParameters.Top = 1;
            requestConfiguration.QueryParameters.Select = ["id", "displayName", "definition"];
        }, cancellationToken);

        var existingPolicy = existingPolicies?.Value?.FirstOrDefault();
        var updateBody = new TokenLifetimePolicy
        {
            Definition = [definition],
            IsOrganizationDefault = false,
            DisplayName = _options.PolicyDisplayName,
        };

        if (existingPolicy?.Id is not null)
        {
            _logger.LogInformation("Updating existing token lifetime policy: {PolicyId}", existingPolicy.Id);
            await _graphServiceClient.Policies.TokenLifetimePolicies[existingPolicy.Id].PatchAsync(updateBody, cancellationToken: cancellationToken);
            existingPolicy.Definition = updateBody.Definition;
            return existingPolicy;
        }

        _logger.LogInformation("Creating token lifetime policy: {PolicyDisplayName}", _options.PolicyDisplayName);
        var createdPolicy = await _graphServiceClient.Policies.TokenLifetimePolicies.PostAsync(updateBody, cancellationToken: cancellationToken);
        return createdPolicy ?? throw new InvalidOperationException("Microsoft Graph returned null while creating a token lifetime policy.");
    }

    private async Task AssignPolicyToServicePrincipalAsync(string servicePrincipalId, string policyId, CancellationToken cancellationToken)
    {
        var existingAssignments = await _graphServiceClient.ServicePrincipals[servicePrincipalId].TokenLifetimePolicies.GetAsync(
            requestConfiguration =>
            {
                requestConfiguration.QueryParameters.Select = ["id"];
            },
            cancellationToken);

        if (existingAssignments?.Value?.Any(policy => string.Equals(policy.Id, policyId, StringComparison.OrdinalIgnoreCase)) == true)
        {
            _logger.LogInformation("Policy {PolicyId} is already assigned to service principal {ServicePrincipalId}.", policyId, servicePrincipalId);
            return;
        }

        var reference = new ReferenceCreate
        {
            OdataId = $"{_graphServiceClient.RequestAdapter.BaseUrl}/policies/tokenLifetimePolicies/{policyId}",
        };

        _logger.LogInformation("Assigning policy {PolicyId} to service principal {ServicePrincipalId}.", policyId, servicePrincipalId);
        await _graphServiceClient.ServicePrincipals[servicePrincipalId].TokenLifetimePolicies.Ref.PostAsync(reference, cancellationToken: cancellationToken);
    }

    private static string BuildPolicyDefinition(int accessTokenLifetimeMinutes)
    {
        var policy = new
        {
            TokenLifetimePolicy = new
            {
                Version = 1,
                // TimeSpan.ToString() yields a valid duration string such as "01:00:00";
                // interpolating the raw minutes would produce an invalid value for
                // lifetimes of 60 minutes or more.
                AccessTokenLifetime = TimeSpan.FromMinutes(accessTokenLifetimeMinutes).ToString(),
            },
        };

        return JsonSerializer.Serialize(policy);
    }

    private void ValidateOptions()
    {
        if (string.IsNullOrWhiteSpace(_options.TargetApplicationClientId))
        {
            throw new InvalidOperationException("TokenLifetimePolicy:TargetApplicationClientId is required.");
        }

        if (string.IsNullOrWhiteSpace(_options.PolicyDisplayName))
        {
            throw new InvalidOperationException("TokenLifetimePolicy:PolicyDisplayName is required.");
        }

        if (_options.AccessTokenLifetimeMinutes is < 10 or > 1440)
        {
            throw new InvalidOperationException("TokenLifetimePolicy:AccessTokenLifetimeMinutes must be between 10 and 1440.");
        }
    }

    private static string EscapeFilterValue(string value) => value.Replace("'", "''", StringComparison.Ordinal);
}

This code can then be used in two ways: from an application client or from a delegated client. Each requires different Graph permissions and authorizes using a different security flow.

Application permissions

No user is involved in this flow.

An Azure App Registration is used to set up the permissions to access the Graph API. A client credentials flow with a client secret is used to acquire the access token. This is fine for a demo, but a managed identity would be a better way to use the permissions inside Azure, or a client assertion for non-Azure applications. This flow is not recommended when a user is involved.

The ClientSecretCredential is used to acquire the application access token.

builder.Services.AddSingleton(sp =>
{
    var authOptions = sp
     .GetRequiredService<IOptions<ApplicationAuthenticationOptions>>().Value;

    var credential = new ClientSecretCredential(
        authOptions.TenantId,
        authOptions.ClientId,
        authOptions.ClientSecret);

    return new GraphServiceClient(credential,
      ["https://graph.microsoft.com/.default"]);
});

Then the Microsoft Graph APIs can be used.

  var authenticationOptions = host.Services
           .GetRequiredService<IOptions<ApplicationAuthenticationOptions>>();
  var tokenLifetimePolicyService = host.Services
           .GetRequiredService<TokenLifetimePolicyService>();

  ApplicationAuthenticationOptions.Validate(authenticationOptions.Value);

  logger.LogInformation("Starting app-only flow for tenant {TenantId}.", 
         authenticationOptions.Value.TenantId);

  logger.LogInformation("Required application permissions: {Permissions}", 
        string.Join(", ", 
           authenticationOptions.Value.RequiredApplicationPermissions));

  await tokenLifetimePolicyService.ApplyPolicyAsync(CancellationToken.None);

Testing the application access token

The policy is applied to Azure App Registration tokens, not to Graph API tokens. An Application ID URI was added to an App Registration and the access token was requested using the default scope; as this is an application, no consent is required as it would be for a user. The token expires in the time defined in the policy.

static async Task TestApplicationTokenPolicy(IHost host, ILogger logger)
{
    // Test token
    var authOptions = host.Services.GetRequiredService<IOptions<ApplicationAuthenticationOptions>>().Value;
    var credential = new ClientSecretCredential(authOptions.TenantId, authOptions.ClientId, authOptions.ClientSecret);

    // Request token for the API (the policy only applies to the App Registration, not Graph)
    var context = new TokenRequestContext(["api://1ff3f063-8b62-43d7-b323-956291bec8e5/.default"]);
    var response = await credential.GetTokenAsync(context);

    logger.LogInformation("Token acquired UTC: {ExpiresIn}, {Token}", response.ExpiresOn, response.Token);
}

Delegated permissions

This flow is used when a user is involved. Delegated access tokens should always be used if possible. An OpenID Connect flow is used to acquire the access token, and only delegated permissions are used.

This example uses a native client with the InteractiveBrowserCredential and its InteractiveBrowserCredentialOptions. This is a public OpenID Connect client.

builder.Services.AddSingleton(sp =>
{
    var authOptions = sp.GetRequiredService<IOptions<DelegatedAuthenticationOptions>>().Value;

    var credentialOptions = new InteractiveBrowserCredentialOptions
    {
        ClientId = authOptions.ClientId,
        TenantId = authOptions.TenantId,
        RedirectUri = new Uri("http://localhost"), 
    };

    var credential = new InteractiveBrowserCredential(credentialOptions);
    return new GraphServiceClient(credential, authOptions.RequiredDelegatedScopes);
});

The policy is applied using the delegated access token with the required permissions.

 var tokenLifetimePolicyService = host.Services.GetRequiredService<TokenLifetimePolicyService>();
 var authenticationOptions = host.Services.GetRequiredService<IOptions<DelegatedAuthenticationOptions>>();

 DelegatedAuthenticationOptions.Validate(authenticationOptions.Value);

 logger.LogInformation("Starting delegated flow for tenant {TenantId}.", authenticationOptions.Value.TenantId);
 logger.LogInformation("Delegated scopes requested: {Scopes}", string.Join(", ", authenticationOptions.Value.RequiredDelegatedScopes));
 await tokenLifetimePolicyService.ApplyPolicyAsync(CancellationToken.None);

Testing the delegated access token

An App Registration is set up to use a scope (access_as_user), and this can be requested using the OpenID Connect flow. This flow requires consent. The Azure SDKs provide helper methods for this.

static async Task TestDelegatedTokenPolicy(IHost host, ILogger logger)
{
    // Test token
    var authOptions = host.Services
           .GetRequiredService<IOptions<DelegatedAuthenticationOptions>>().Value;

    var credentialOptions = new InteractiveBrowserCredentialOptions
    {
        ClientId = authOptions.ClientId,
        TenantId = authOptions.TenantId,
        RedirectUri = new Uri("http://localhost"),
    };
    var credential = new InteractiveBrowserCredential(credentialOptions);

    // Request token for the API (the policy only applies to the App Registration, not Graph)
    var context = new TokenRequestContext(
            ["api://9949e3d8-ffb2-4e86-908a-fd92b6140972/access_as_user"]);

    var response = await credential.GetTokenAsync(context);

    logger.LogInformation("Token acquired UTC: {ExpiresIn}, {Token}",
                response.ExpiresOn, response.Token);
}

Notes

This was really easy to implement using the documentation. The docs implement the examples using PowerShell, but this can easily be switched to .NET using any AI coding tool. What is missing from the docs is the right permissions and the way to acquire the access token correctly.

Links

https://learn.microsoft.com/en-us/entra/identity-platform/configurable-token-lifetimes

https://learn.microsoft.com/en-us/entra/identity-platform/configure-token-lifetimes




Simple QR Code Maker Version 2 is Here and It’s Free Forever


I’m thrilled to announce that Simple QR Code Maker Version 2 is now live on the Microsoft Store! This is a massive update that transforms the app from a simple QR code generator into a full workflow tool, for both creating polished QR codes and reading them back from real-world images. And as always, it is completely free: no subscriptions, no in-app purchases, no catches. Free today, free tomorrow, free always.

Download from the Microsoft Store

Whether you need to make a single QR code or a batch of hundreds, customize them with your brand colors and logo, decode codes from a camera or a tricky photo, or import data straight from a spreadsheet, v2 has you covered. Here’s a look at everything that’s new and improved.

Create and Batch QR Codes

  • Generate one QR code or many at once from multiline text
  • Built-in helpers for URLs, Wi-Fi credentials, email messages, and vCards
  • Load source text from .txt and .csv files
  • Import data from .csv and .tsv spreadsheets for large batches
  • Preview spreadsheet rows before import, pick the exact column, and add prefix/suffix text
  • Remove duplicate values during import and optionally write generated IDs back to the source sheet
  • Switch between one-line-per-code and multi-line-as-one-code modes

Customize the Look

  • Set foreground and background colors
  • Sample colors directly from a logo or reference image
  • Add frame presets with custom label text (with smart fallback text where supported)
  • Add center logos from image files or emoji (with style options)
  • Adjust center logo size and padding
  • Remove the background from raster logos on supported devices
  • Save reusable brand presets: colors, content, error correction, logo, and frame settings all in one
  • Apply, edit, delete, and set a default brand

Save, Export, Print, and Share

  • Save as PNG or SVG (or both at once)
  • Export batches as ZIP packages
  • Copy PNG or SVG to the clipboard, or copy raw SVG text
  • Print with configurable page type, layout, margins, spacing, code size, and labels
  • Accept shared text and URLs from the Windows Share UI

Read and Decode QR Codes

  • Decode from image files, drag-and-drop, clipboard, camera, or screenshots
  • Accept shared images from the Windows Share UI
  • Open a whole folder of images and browse every file in the app
  • Batch decode an entire folder and export a .txt result per image
  • Build a folder summary view and export as CSV
  • Copy decoded text, launch decoded links, or send content straight back into the creator

Advanced Decoding and Recovery

Got a difficult code that won’t scan? v2 includes a full suite of image recovery tools:

  • Grayscale conversion, color inversion, and contrast adjustment
  • Sample a black point or white point from the image
  • Add border padding to help detection
  • Select and decode a cut-out region
  • Correct perspective by selecting the four QR corners
  • Manually unwarp difficult codes with corner and alignment points

History, Safety, and Settings

  • Full history of created QR codes and decoded images
  • Warns on likely redirector links, with a safe-domain allowlist you manage
  • Choose whether the app starts in Create or Read mode
  • Set a quick-save location and choose your app theme
  • Export and import a full backup of settings, brands, and history as a ZIP

Always Free

Simple QR Code Maker is, and will always be, completely free. No premium tier, no ads, no subscription. Just download it from the Microsoft Store and use every single feature without spending a cent.

More Posts Coming Soon

Version 2 is packed with features and I want to do each one justice. Over the coming weeks I’ll be publishing dedicated posts that take a deep dive into individual features, from the brand preset system to the advanced perspective-correction recovery tools. Stay tuned!

In the meantime, grab the app from the Microsoft Store and let me know what you think. Feedback and feature requests are always welcome on the GitHub repo.

Joe


Using AWS Locally with MiniStack and .NET


Introduction

When using AWS for development, sometimes it is useful to have some sort of emulator, so that we don't incur costs while doing development and debugging. For that, we have local cloud emulators, and LocalStack used to be the standard one not too long ago; the problem is, its licence changed and it is now generally not free. The good news is that there are free alternatives, such as MiniStack (GitHub: MiniStack), a free and open-source AWS cloud emulator that can emulate more than 55 AWS services and is very similar to LocalStack. In this post I'll talk about how to set it up so that we can point to it and use it as if it were the real thing with .NET. We will be using Docker Compose to spin up the local environment.

Docker Compose

We want to use the latest MiniStack image, which is made available from https://hub.docker.com/r/ministackorg/ministack. Our docker-compose.yml file will look like this:

services:

  ministack:
    image: ministackorg/ministack:latest
    container_name: ministack
    ports:
      - "4566:4566"
    environment:
      - SERVICES=s3,sqs,ssm
      - AWS_DEFAULT_REGION=eu-west-2
      - AWS_ENDPOINT_URL=http://localhost:4566
    volumes:
      - ./init:/etc/localstack/init/ready.d:ro
      - /var/run/docker.sock:/var/run/docker.sock
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:4566/_ministack/health')"]
      interval: 10s
      timeout: 3s
      retries: 3

Noteworthy:

  • We are exposing port 4566 (default port for LocalStack and MiniStack)
  • Services to start are defined using the SERVICES environment variable; in this case, I'm starting S3, SQS, and SSM Parameter Store
  • Setting the AWS_ENDPOINT_URL environment variable to point to the local setup
  • Setting the default region in AWS_DEFAULT_REGION
  • Mounting a local init folder to the container's /etc/localstack/init/ready.d folder as read-only (ro); this is so that we can run an initialisation script (more on this in a minute)
  • For the health check, because the ministackorg/ministack image does not include curl, we use Python with the urllib.request library to make HTTP requests to monitor our container
  • I didn't include any virtual network configuration or anything else; this is just the bare minimum.

For more info on MiniStack configuration, including how to set up durable storage for all of its services, please check out https://ministack.org/docs.

Initialisation

We need to perform some initialisation: creating S3 buckets, SQS queues, and reference SSM configuration. By design, MiniStack (like LocalStack) runs any scripts, ordered by name, that exist in the folder /etc/localstack/init/ready.d after it starts; that's why we mapped it to a local folder. Let's create a local folder (on our Windows/Mac/Linux/whatever machine) called init (the same name as in docker-compose.yml) and inside it create a file called 01-init.sh (the actual name does not matter much, but scripts are sorted by it in alphabetical order) with this content:

#!/bin/sh

# fail in case there are errors (return code not 0)
set -euo pipefail

# log to both stdout and stderr for visibility in Docker logs
echo "Initialising" | tee /dev/stderr

# give some time for the services to start
sleep 5

# create a bucket in s3
aws s3 mb s3://bucket-name | tee /dev/stderr

# create a queue
aws sqs create-queue --queue-name queue_name | tee /dev/stderr

# set some value in parameter store
aws ssm put-parameter --name "parameter_name" --value '{ "foo": "bar" }' --type "String" --overwrite | tee /dev/stderr

# all done
echo "Initialisation complete" | tee /dev/stderr

Very important: this file needs to be saved using the UNIX line terminator (LF), not the Windows one (CR+LF). Also, it must be saved with UTF-8 encoding!

What this does is:

  • Sets error handling, so that if any command fails, the script aborts with the command's error code
  • Sleeps for a while, to give services time to start
  • Creates an S3 bucket called bucket-name
  • Creates an SQS queue called queue_name
  • Creates a parameter in SSM Parameter Store called parameter_name with some simple JSON
  • All output is also sent to standard error output using tee

For the full list of parameters to aws, please have a look here: https://docs.aws.amazon.com/cli/latest/reference.

Usage

To start our local service, we just call:

docker-compose up

And to test the services, like listing existing S3 buckets:

aws s3 ls

Note: we can skip the --endpoint-url and --region parameters if we have set the AWS_ENDPOINT_URL and AWS_DEFAULT_REGION environment variables.
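If those environment variables are not set, the same checks can be run with explicit flags; the endpoint and region below match the compose file, and the resource names come from the init script:

```
# List buckets on the local emulator explicitly
aws --endpoint-url http://localhost:4566 --region eu-west-2 s3 ls

# Inspect the queue and parameter created by the init script
aws --endpoint-url http://localhost:4566 --region eu-west-2 sqs list-queues
aws --endpoint-url http://localhost:4566 --region eu-west-2 ssm get-parameter --name "parameter_name"
```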

If we want to use MiniStack with .NET, we can add the following to our appsettings.json file for the default settings:

{
  "AWS": {
    "Region": "eu-west-2",
    "UseMiniStack": false
  }
}

And, possibly on appsettings.Development.json, we can have an override for the Development environment to use the local emulator:

{
  "AWS": {
    "UseMiniStack": true,
    "ServiceUrl": "http://localhost:4566"
  }
}

This way we know that when UseMiniStack is enabled, the local emulator will be used, together with the ServiceUrl that points to our local emulator.

Now, for actually using this configuration, for example, for accessing S3:

// get config section
var awsSection = builder.Configuration.GetSection("AWS");
var region = awsSection["Region"] ?? "eu-west-2";
var useMiniStack = awsSection.GetValue<bool>("UseMiniStack");
var serviceUrl = awsSection["ServiceUrl"];

// create a config object to be reused by all services
var config = new AmazonS3Config { RegionEndpoint = RegionEndpoint.GetBySystemName(region) };
if (useMiniStack && !string.IsNullOrWhiteSpace(serviceUrl))
{
    config.ServiceURL = serviceUrl;
    config.ForcePathStyle = true;
    config.UseHttp = true;
}

// register S3 client
builder.Services.AddSingleton<IAmazonS3>(_ => new AmazonS3Client(config));

I think you get the idea: we just inject the IAmazonS3/AmazonS3Client service wherever it is needed and off we go! We will need the AWSSDK.S3 NuGet package for S3, AWSSDK.SQS for SQS, and AWSSDK.SimpleSystemsManagement for SSM Parameter Store. Similar registrations for IAmazonSQS/AmazonSQSClient and IAmazonSimpleSystemsManagement/AmazonSimpleSystemsManagementClient should be trivial.
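As a sketch (not from the post), the SQS registration could mirror the S3 one, reusing the same configuration values read above; AmazonSQSConfig inherits ServiceURL from the shared ClientConfig base:

```csharp
// Register the SQS client the same way, reusing region/useMiniStack/serviceUrl
// read from the "AWS" configuration section earlier.
builder.Services.AddSingleton<IAmazonSQS>(_ =>
{
    var sqsConfig = new AmazonSQSConfig
    {
        RegionEndpoint = RegionEndpoint.GetBySystemName(region)
    };

    if (useMiniStack && !string.IsNullOrWhiteSpace(serviceUrl))
    {
        // Point the client at the local emulator instead of the real AWS endpoint
        sqsConfig.ServiceURL = serviceUrl;
    }

    return new AmazonSQSClient(sqsConfig);
});
```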

Conclusion

I find MiniStack very convenient and easy to use; it is free and fully open source, and it has a great user base. Of course, do keep in mind that you will still have to test against the real AWS, but for daily development this should be more than enough. I hope you find this useful; let me hear your thoughts!


Refactoring an ASP.NET Core API with clean architecture


Learn how to refactor an ASP.NET Core API using clean architecture by separating validation, business logic, and database access into clear layers.

The page Refactoring an ASP.NET Core API with clean architecture appeared on Round The Code.
