This week, we review our 2025 predictions, discuss the big stories, and speculate on 2026. Plus, Coté dives deep into the EU broth market.
Watch the YouTube Live Recording of Episode 553
Static code review has been part of software engineering for decades. Long before AI entered the workflow, teams relied on static analysis to catch bugs early, enforce standards and reduce obvious risks before code ever ran.
And it still matters.
But as codebases grow and pull request velocity increases, many teams are discovering that static code review alone is no longer enough. It’s a foundation, not a complete review strategy.
Static code review is the process of analyzing source code without executing it. The goal is to identify issues by inspecting the structure, syntax, and patterns in the code itself.
Static code analysis reviews typically look for:
Because the code never runs, static analysis is fast, repeatable and easy to automate.
Static code review answers a very specific question:
“Does this code violate known rules or patterns?”
Human reviewers answer a different one:
“Does this change make sense in context?”
Both are important, but they operate at different levels.
Static code analysis is excellent at enforcing consistency and catching low-level issues early. It struggles with intent, tradeoffs and system-level reasoning.
That distinction becomes critical as teams scale.
Static code review tools are still essential in modern workflows. Used correctly, they deliver real value.
They are especially good at:
Popular static code review tools integrate directly into CI pipelines and pull requests, making them a reliable first line of defense.
For many teams, static analysis is the baseline: non-negotiable and always on.
Static code review tools operate on rules and patterns. That’s also their limitation.
They generally cannot:
This is why teams often experience “alert fatigue.” The tool is technically correct but not always helpful.
As pull requests grow larger and more frequent, static analysis alone can turn reviews into a checklist instead of a conversation.
The most effective teams treat static code review as the first pass, not the final word.
A strong review flow often looks like this:
This is where tools like PRFlow fit naturally.
PRFlow doesn’t replace static code review. It builds on it, adding structure, consistency, and context-aware review logic so humans don’t have to sift through low-signal feedback.
PRFlow is designed around a simple idea:
Every pull request deserves a clean, predictable starting point.
Static analysis provides raw signals. PRFlow helps turn those signals into a usable review baseline by:
Instead of reviewers repeating the same comments across PRs, they start from a higher-quality foundation.
Despite its limits, static code review isn’t going away, and it shouldn’t.
It’s still the fastest way to catch:
What’s changing is how teams use it.
Static analysis is no longer the review. It’s part of the review system.
Static code review is a powerful tool, but it works best when paired with systems that understand context and workflow.
As teams grow, the challenge isn’t finding more issues. It’s deciding which issues deserve attention.
Static code review tools provide the signal. PRFlow helps teams act on it consistently, predictably, and without friction.
That’s how code reviews scale without losing quality.
Check it out: https://www.graphbit.ai/prflow
1. A guide to choosing the right Apple Watch
Source: https://techcrunch.com/2026/01/01/is-the-apple-watch-se-3-a-good-deal/
Summary: The gap between Apple's standard and budget smart watches has never felt smaller.
2. A beginner’s guide to Mastodon, the open source Twitter alternative
Source: https://techcrunch.com/2026/01/01/what-is-mastodon/
Summary: Unless you’re really in the know about nascent platforms, you probably didn’t know what Mastodon was until Elon Musk bought Twitter and renamed it X. In the initial aftermath of the acquisition, as users fretted over what direction Twitter would take, millions of users hopped over to Mastodon, a fellow microblogging site. As time […]
3. European banks plan to cut 200,000 jobs as AI takes hold
Source: https://techcrunch.com/2026/01/01/european-banks-plan-to-cut-200000-jobs-as-ai-takes-hold/
Summary: The bloodletting will hit hardest in back-office operations, risk management, and compliance.
4. OpenAI bets big on audio as Silicon Valley declares war on screens
Source: https://techcrunch.com/2026/01/01/openai-bets-big-on-audio-as-silicon-valley-declares-war-on-screens/
Summary: The form factors may differ, but the thesis is the same: audio is the interface of the future. Every space -- your home, your car, even your face -- is becoming an interface.
5. LG’s new karaoke-ready party speaker uses AI to remove song vocals
Source: https://www.theverge.com/news/852362/lg-xboom-stage-501-karaoke-launch-ces-2026
Summary: LG is adding a karaoke-focused party speaker, built in collaboration with Will.i.am, to its lineup of Xboom devices. Announced this week, LG says the Stage 501 speaker comes with an "AI Karaoke Master" that can remove or adjust vocals from "virtually any song," similar to the Soundcore Rave 3S. It can also adjust […]
6. Public domain 2026: Betty Boop, Pluto, and Nancy Drew set free
Source: https://www.theverge.com/policy/852332/public-domain-2026-betty-boop-nancy-drew-pluto
Summary: Some years ago, I was writing a science fiction short story in which I wanted to incorporate verses from a 1928 song, "Button Up Your Overcoat." However, when I sold the story, my editor told me that since the song was still copyrighted, it was safer not to include the verses. If I had written […]
7. The top 6 media/entertainment startups from Disrupt Startup Battlefield
Source: https://techcrunch.com/2026/01/01/the-top-6-media-entertainment-startups-from-disrupt-startup-battlefield/
Summary: Here is the full list of the media/entertainment Startup Battlefield 200 selectees, along with a note on what made us select them for the competition.
8. Meet the new tech laws of 2026
Source: https://www.theverge.com/policy/851664/new-tech-internet-laws-us-2026-ai-privacy-repair
Summary: As usual, 2025 was a year of deep congressional dysfunction in the US. But state legislatures were passing laws that govern everything from AI to social media to the right to repair. Many of these laws, alongside rules passed in past years, take effect in 2026 - either right now or in the coming months. […]
Lots of folks use the Claude IDE, or the Claude Code VSCode extension. Unfortunately, your prompts and completions are used (by default) to train Claude models. [0]
AWS Bedrock, on the other hand, doesn't use your prompts and completions to train any AWS models or give them to 3rd parties. [1]
For these reasons (privacy, data sovereignty), I'm more inclined to use Bedrock as the LLM backend in my IDE. Today we'll go over how to set up the VSCode Claude Code extension with AWS Bedrock and use the Claude Sonnet 4.5 foundation model.
In order to use the Claude Code VSCode extension with Bedrock and the Claude Sonnet 4.5 model, we need to perform these tasks:
To do this, we'll use Terraform to create our AWS IAM user and policies (but you could use the AWS console instead).
Then we'll integrate this all together and verify it works as expected.
AWS Bedrock enables generating short-lived API tokens via its SDK. [2] Claude Code does support two methods for automatic AWS credential refresh, but Bedrock API tokens are not one of them.
If it did support this feature, it would be the best solution from a security perspective: tokens would expire after 12 hours, and the extension would automatically refresh them the next time it was used.
Instead, it only supports AWS SSO (or rather, AWS IAM Identity Center) for its awsAuthRefresh option, or AWS IAM credentials for its awsCredentialsExport refresh method. [3] This deficiency is a poor decision, or at least an oversight, by the Claude Code development team.
Unfortunately, a more egregious issue is that the documentation claims the awsCredentialsExport refresh method is functional when it is not. Whether it's a regression, a bug, or something that never worked, I couldn't get it working within a couple of hours (including time spent conversing with the Claude Code VSCode extension itself, using a working AWS profile, and it couldn't suggest a workaround to this problem). On top of these setbacks, using the Claude Code settings file (~/.claude/settings.json) didn't work either, so I had to use the VSCode settings file to set all Claude Code extension configuration options.
Since using AWS IAM Identity Center for a personal account is overkill, refreshing Bedrock API tokens in Claude Code is not supported, and the awsCredentialsExport method for automatically refreshing credentials in the Claude Code VSCode extension is not functional, I'll settle for the lowest common denominator and use an AWS profile in my configuration. I don't love this method because it relies on a long-lived AWS IAM user credential (access key and secret access key), but I can't do better given the current state of the Claude Code VSCode extension.
Create an IAM user and attach IAM policy to the user with this Terraform:
resource "aws_iam_user" "bedrock_user" {
name = "bedrock-user"
}
resource "aws_iam_access_key" "bedrock" {
user = aws_iam_user.bedrock_user.name
}
data "aws_iam_policy_document" "bedrock" {
statement {
effect = "Allow"
actions = [
"bedrock:InvokeModel",
"bedrock:ListFoundationModels",
"bedrock:ListInferenceProfiles",
"bedrock:InvokeModelWithResponseStream",
# "bedrock:CallWithBearerToken" # required if using Bedrock API token which we're not doing here
]
resources = ["*"]
}
statement {
effect = "Allow"
actions = [
"aws-marketplace:ViewSubscriptions",
"aws-marketplace:Subscribe"
]
resources = ["*"]
condition {
test = "StringEquals"
variable = "aws:CalledViaLast"
values = ["bedrock.amazonaws.com"]
}
}
}
resource "aws_iam_user_policy" "bedrock" {
name = "bedrock"
user = aws_iam_user.bedrock_user.name
policy = data.aws_iam_policy_document.bedrock.json
}
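If you're following along, applying this is the usual Terraform workflow (this assumes Terraform is installed and that admin-level AWS credentials are available in your shell, for example via aws-vault):

terraform init   # download the AWS provider
terraform plan   # review the IAM user, access key, and policy to be created
terraform apply  # create them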
I typically use aws-vault to manage my AWS credentials, for enhanced security and short-lived credentials. But again, we're going to have to use the standard AWS method of storing long-lived credentials in our ~/.aws/credentials file and then access that profile from the Claude Code extension.
Above, in the IAM user creation section, we only created the IAM user and policy. Now you will need to create an access key to use as the credential in your profile. Go to the AWS console and create an access key for this user, following the instructions for the CLI use case.
After creating it, copy your Access Key and Secret Access Key somewhere for safekeeping (I use a password manager for this purpose).
To set up your profile in the ~/.aws/credentials file, use the command:
aws configure --profile YOUR_AWS_PROFILE_NAME
It will prompt you for the AWS Access Key and Secret Access Key, which will be added to the ~/.aws/credentials file under the profile name you specified in the command (choose wisely!)
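If you want to sanity-check the result, ~/.aws/credentials should end up with an entry roughly like this (the profile name and key values below are placeholders):

[YOUR_AWS_PROFILE_NAME]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = EXAMPLESECRETKEYxxxxxxxxxxxxxxxxxxxxxxxx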
After creating the above IAM user and policy and setting up the profile, we'll log in to the AWS console. Choose the AWS region that you normally use, but be aware that these Bedrock foundation models aren't available in every AWS region. I used region us-west-2, but us-east-1 and us-east-2 are also supported.
Anthropic requires first-time customers to submit use case details before invoking a model, once per account (or once at the organization's management account). [4] This is a rather antiquated policy in the cloud era, but it's required nonetheless.
Go to the Bedrock section of the AWS console, then to "Chat/Text Playground" and select Anthropic Claude Sonnet 4.5. You'll be presented with a dialog to fill out and enable the foundation model.
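Once access is granted, you can optionally verify from the CLI that the new IAM user can see Anthropic's models; this exercises the bedrock:ListFoundationModels permission we granted above (adjust the region and profile to your own):

aws bedrock list-foundation-models --by-provider anthropic --region us-west-2 --profile YOUR_AWS_PROFILE_NAME --no-cli-pager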
Install the VSCode extension:
code --install-extension anthropic.claude-code
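You can confirm the extension installed with:

code --list-extensions | grep anthropic.claude-code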
Identify your Bedrock Claude Sonnet 4.5 inference profile ARN:
aws bedrock list-inference-profiles --region us-west-2 --profile YOUR_AWS_PROFILE_NAME --no-cli-pager | jq '.inferenceProfileSummaries | .[] | select(.inferenceProfileId | match("us.anthropic.claude-sonnet-4-5-20250929-v1:0")) | .inferenceProfileArn'
NOTE: This assumes you're in the US. If you're using another region, substitute the corresponding Anthropic Claude Sonnet inference profile name in the above command.
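The command should print a single quoted ARN; it will look roughly like this, with your own account ID in place of the placeholder:

"arn:aws:bedrock:us-west-2:123456789012:inference-profile/us.anthropic.claude-sonnet-4-5-20250929-v1:0"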
Add the following to your VSCode user settings.json file (usually this is ~/.config/Code/User/settings.json):
{
  "claudeCode.selectedModel": "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
  "claudeCode.environmentVariables": [
    {
      "name": "AWS_PROFILE",
      "value": "YOUR_AWS_PROFILE_NAME"
    },
    {
      "name": "AWS_REGION",
      "value": "YOUR_AWS_REGION_FROM_ABOVE"
    },
    {
      "name": "BEDROCK_MODEL_ID",
      "value": "INFERENCE_PROFILE_ARN_FROM_ABOVE"
    },
    {
      "name": "CLAUDE_CODE_USE_BEDROCK",
      "value": "1"
    }
  ],
  "claudeCode.disableLoginPrompt": true
}
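If you also have the standalone claude CLI installed, you can sanity-check the Bedrock wiring from a terminal first. This is just a sketch; it assumes the CLI honors the same CLAUDE_CODE_USE_BEDROCK flag and reads the model ID from the ANTHROPIC_MODEL environment variable, mirroring the values in the settings above:

# one-off prompt against Bedrock using the same profile, region, and inference profile
CLAUDE_CODE_USE_BEDROCK=1 \
AWS_PROFILE=YOUR_AWS_PROFILE_NAME \
AWS_REGION=YOUR_AWS_REGION_FROM_ABOVE \
ANTHROPIC_MODEL='INFERENCE_PROFILE_ARN_FROM_ABOVE' \
claude -p "Reply with the word hello"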
You'll probably have to restart your VSCode session if it was already running. Then open the Claude window and type a question or request; you should see a successful response like the one below:
We did it! I'm not super impressed with the missing/broken/overlooked features of the Claude Code VSCode extension related to AWS IAM credentials, but it works fine for the time being. I'll revisit this and report back when these issues are resolved and we can all start using short-lived credentials with the extension.
I borrowed some information from this incredibly detailed blog post by Vasko Kelkocev. [5]
But even though that blog post was written in October 2025, it was already out of date by the time I found it. I had to add more IAM permissions to get the extension working with Bedrock (specifically bedrock:InvokeModelWithResponseStream), and there were some other configuration details I had to play with. Thanks for the great blog, Vasko.