Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Bluesky 2024 Moderation Report


2024 was a year of immense growth for Bluesky. We launched the app publicly in February and gained over 23M users by the end of the year. With this growth came anticipated challenges in scaling Trust & Safety, from adding workstreams to adapting to new harms.

Throughout 2024, our Trust & Safety team has worked to protect our growing userbase and uphold our community standards. Our approach has focused on assessing potential harms based on both their frequency and severity, allowing us to direct our resources to where they can have the greatest impact. Looking ahead to 2025, we're investing in stronger proactive detection systems to complement user reporting, as a growing network needs multiple detection methods to rapidly identify and address harmful content. In Q1, we'll be sharing a draft of updated Guidelines as we continue adapting to our community’s needs.

Overview

In 2024, Bluesky grew from 2.89M users to 25.94M users. In addition to users hosted on Bluesky’s infrastructure, there are over 4,000 users running their own infrastructure (Personal Data Servers), self-hosting their content, posts, and data.

To meet the demands caused by user growth, we’ve increased our moderation team to roughly 100 moderators and continue to hire more staff. Some moderators specialize in particular policy areas, such as dedicated agents for child safety. Our moderation team is staffed 24/7 and reviews user reports around the clock. This is a tough job, as moderators are consistently exposed to graphic content. At the start of September 2024, we began providing psychological counselling to alleviate the burden of viewing this content.

Reports

In 2024, users submitted 6.48M reports to Bluesky’s moderation service. That’s a 17x increase from the previous year — in 2023, users submitted 358K reports total. The volume of user reports increased with user growth and was non-linear, as the graph of report volume below shows:

Report volume in 2024

In late August, Bluesky saw a large wave of new users from Brazil, with spikes of up to 50k reports per day. Prior to this, our moderation team handled most reports within 40 minutes. For the first time in 2024, we had a backlog of moderation reports. To address this, we increased the size of our Portuguese-language moderation team, added constant moderation sweeps and automated tooling for high-risk areas such as child safety, and hired moderators through an external contracting vendor for the first time.

We already had automated spam detection in place, and after this wave of growth in Brazil, we began investing in automating more categories of reports so that our moderation team could review suspicious or problematic content rapidly. In December, we reviewed our first wave of automated reports for content categories like impersonation. This dropped processing time for high-certainty accounts to within seconds of receiving a report, though it also caused some false positives. We're now exploring the expansion of this tooling to other policy areas. Even with automation reducing our response time, human moderators are still kept in the loop: they review all appeals and false positives.

Some more statistics: The proportion of users submitting reports held fairly stable from 2023 to 2024. In 2023, 5.6% of our active users1 created one or more reports. In 2024, 1.19M users made one or more reports, approximately 4.57% of our user base.

In 2023, 3.4% of our active users received one or more reports. In 2024, 770K users received a report, comprising 2.97% of our user base.

The majority of reports were of individual posts, with a total of 3.5M reports. This was followed by account profiles at 47K reports, typically for a violative profile picture or banner photo. Lists received 45K reports. DMs received 17.7K reports. Significantly lower are feeds at 5.3K reports, and starter packs with 1.9K reports.

Our users report content for a variety of reasons, and these reports help guide our focus areas. Below is a summary of the reports we received, categorized by the reasons users selected. The categories vary slightly depending on whether a report is about an account or a specific post, but here’s the full breakdown:

  • Anti-social Behavior: Reports of harassment, trolling, or intolerance – 1.75M
  • Misleading Content: Includes impersonation, misinformation, or false claims about identity or affiliations – 1.20M
  • Spam: Excessive mentions, replies, or repetitive content – 1.40M
  • Unwanted Sexual Content: Nudity or adult content not properly labeled – 630K
  • Illegal or Urgent Issues: Clear violations of the law or our terms of service – 933K
  • Other: Issues that don’t fit into the above categories – 726K

These insights highlight areas where we need to focus more attention as we prioritize improvements in 2025.

Labels

In 2024, Bluesky applied 5.5M labels, including both individual post labels and account-level labels. To give an idea of volume: in November 2024, 2.5M videos were posted on Bluesky2, along with 36.14M images. Labeling comes primarily from automation: every image, as well as frames from each video, is sent to a provider for assessment; the provider returns verdicts that map to our specific labels, and those are the labels you see from Bluesky Moderation. None of the images or videos are retained by the vendor or used for training generative AI systems. In June 2024, we analyzed the effectiveness of this system and concluded that it was 99.90% accurate (i.e., it labeled the right things with the right labels). Human moderators review all appeals on labels, but due to backlogs, there are currently delays.

The top human-applied labels were:

  • Sexual-figurative3 - 55,422
  • Rude - 22,412
  • Spam - 13,201
  • Intolerant - 11,341
  • Threat - 3,046

Appeals

In 2024, 93,076 users submitted at least one appeal in the app, for a total of 205K individual appeals. In most cases, the appeal was due to disagreement with label verdicts.

We currently handle user appeals for taken-down accounts via our moderation email inbox.

In 2025, we will transition to responding to moderation reports directly within the Bluesky app, which will streamline user communication. For example, we’ll be able to report back to users what action was taken on their reports. Additionally, in a future iteration, it will be possible for users to appeal account takedowns directly within the Bluesky app instead of having to send us an email.

Takedowns

In 2024, Bluesky moderators took down 66,308 accounts, and automated tooling took down 35,842 accounts for reasons such as spam and bot networks. Moderators took down 6,334 records (posts, lists, feeds, etc.), while automated systems removed 282.

This month (January 2025), we built policy reasons into Ozone, our open-source moderation tool. This will give us more granular data on takedown rationale moving forward.

Legal Requests

In 2024, we received 238 requests from law enforcement, governments, and legal firms; we responded to 182 and complied with 146. The majority of requests came from German, U.S., Brazilian, and Japanese law enforcement.

In the chart below:

  • User Data Requests are requests for user data.
  • Data Preservation Requests are requests for Bluesky to store user data pending legal authorization to transfer the data.
  • Emergency Data Requests are extreme cases where there’s a threat to life (e.g., someone actively discussing their own suicide with a time and date given for an attempt).
  • Takedown Requests are requests for content removal.
  • Subpoena Requests are largely user data requests, but if Bluesky fails to provide the data in a timely manner, it must physically appear in court to defend not providing the data.

Type of Request | Requests | Responded
User Data Request | 111 | 87
Data Preservation Request | 8 | 8
Emergency Data Request | 13 | 12
Takedown Request | 45 | 22
Inquiry | 44 | 36
Subpoena | 17 | 17
Totals | 238 | 182

Legal request volume peaked between September and December 2024.

Legal requests in 2024

Copyright / Trademark

In 2024, we received a total of 937 copyright and trademark cases. There were four confirmed copyright cases in the entire first half of 2024; that number rose to 160 in September. The vast majority of cases occurred between September and December.

We published a copyright form in late 2024, which provided more structure to our report responses. The influx of users from Brazil meant that professional copyright companies began scraping Bluesky data and sending us copyright claims. With the move to a structured copyright form, we expect to have more granular data in 2025.

Child Safety

We subscribe to a number of hashes (digital fingerprints) that match known cases of Child Sexual Abuse Material (CSAM). When an image or video is uploaded to Bluesky and matches one of these hashes, it is immediately removed from the site and our infrastructure without the need for a human to view the content.
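As an illustrative sketch of the general idea (hypothetical code, not Bluesky's implementation; production systems rely on dedicated hash-matching services and perceptual hashes rather than this simplified exact-match check), the hash-and-compare flow looks roughly like this:

```typescript
// Simplified exact-hash matcher: compute a digest of uploaded bytes
// and compare it against a blocklist of known digests, so that a
// match can be actioned without anyone viewing the content.
import { createHash } from "node:crypto";

function sha256Hex(data: Uint8Array): string {
  return createHash("sha256").update(data).digest("hex");
}

class HashBlocklist {
  private known = new Set<string>();

  add(digest: string): void {
    this.known.add(digest);
  }

  // True if the uploaded content matches a known digest.
  matches(upload: Uint8Array): boolean {
    return this.known.has(sha256Hex(upload));
  }
}

const blocklist = new HashBlocklist();
const banned = new TextEncoder().encode("known-bad-bytes");
blocklist.add(sha256Hex(banned));

console.log(blocklist.matches(banned)); // true
console.log(blocklist.matches(new TextEncoder().encode("ok"))); // false
```

Note that an exact cryptographic hash only matches byte-identical files; real CSAM-detection systems use perceptual hashes so that re-encoded or slightly altered copies still match.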

In 2024, Bluesky submitted 1,154 reports of confirmed CSAM to the National Center for Missing & Exploited Children (NCMEC). Reports consist of the account details, along with media manually reviewed by one of our specialized child safety moderators. Each report can involve many pieces of media, though most reports involve fewer than five.

CSAM is a serious issue on any social network. With surges in user growth also came increased complexity in child safety. Cases included accounts attempting to sell CSAM by linking off-platform, potentially underage users trying to sell explicit imagery, and pedophiles attempting to share encrypted chat links. In these cases, we rapidly updated our internal guidance to our moderation team to ensure prompt response times in taking down this activity.

To read more about how Bluesky handles child safety, you can find a co-published blog post on Thorn’s website.

Footnotes

  1. users that have an account that hasn’t been suspended or deleted

  2. Not inclusive of GIFs through Tenor

  3. Our automated labelling can’t yet accurately separate figurative (drawn) images from the rest, so these labels are usually applied when users appeal automation-applied labels on their art


The need to invest in AI skills in schools


Earlier this week, the UK Government published its AI Opportunities Action Plan, which sets out an ambitious vision to maintain the UK’s position as a global leader in artificial intelligence. 

Whether you’re from the UK or not, it’s a good read, setting out the opportunities and challenges facing any country that aspires to lead the world in the development and application of AI technologies. 

In terms of skills, the Action Plan highlights the need for the UK to train tens of thousands more AI professionals by 2030 and sets out important goals to expand education pathways into AI, invest in new undergraduate and master’s scholarships, tackle the lack of diversity in the sector, and ensure that the lifelong skills agenda focuses on AI skills. 

Photo of a group of young people working through some Experience AI content.

This is all very important, but the Action Plan fails to mention what I think is one of the most important investments we need to make, which is in schools. 

“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

While reading the section of the Action Plan that dealt with AI skills, I was reminded of this quote attributed to Bill Gates, which was adapted from Roy Amara’s law of technology. We tend to overestimate what we can achieve in the short term and underestimate what we can achieve in the long term. 

In focusing on the immediate AI gold rush, there is a risk that the government overlooks the investments we need to make right now in schools, which will yield huge returns — for individuals, communities, and economies — over the long term. Realising the full potential of a future where AI technologies are ubiquitous requires genuinely long-term thinking, which isn’t always easy for political systems that are designed around short-term results. 

Photo focused on a young person working on a computer in a classroom.

But what are those investments? The Action Plan rightly points out that the first step for the government is to accurately assess the size of the skills gap. As part of that work, we need to figure out what needs to change in the school system to build a genuinely diverse and broad pipeline of young people with AI skills. The good news is that we’ve already made a lot of progress. 

AI literacy

Over the past three years, the Raspberry Pi Foundation and our colleagues in the Raspberry Pi Computing Education Research Centre at the University of Cambridge have been working to understand and define what AI literacy means. That led us to create a research-informed model for AI literacy that unpacks the concepts and knowledge that constitute a foundational understanding of AI. 

In partnership with one of the leading UK-based AI companies, Google DeepMind, we used that model to create Experience AI. This suite of classroom resources, teacher professional development, and hands-on practical activities enables non-specialist teachers to deliver engaging lessons that help young people build that foundational understanding of AI technologies. 

We’ve already seen huge demand, with thousands of lessons taught in UK schools, and we’re delighted to be working with Parent Zone to support a wider rollout in the UK, along with free teacher professional development.

CEO Philip Colligan and Prime Minister Keir Starmer at the UK launch of Experience AI.

With the generous support of Google.org, we are working with a global network of education partners — from Nigeria to Nepal — to localise and translate these resources, and deliver locally organised teacher professional development. With over 1 million young people reached already, Experience AI can plausibly claim to be the most widely used AI literacy curriculum in the world, and we’re improving it all the time. 

All of the materials are available for anyone to use and can be found on the Experience AI website.

There is no AI without CS

With the CEO of GitHub claiming that it won’t be long before 80% of code is written by AI, it’s perhaps not surprising that some people are questioning whether we still need to teach kids how to code.

I’ll have much more to say on this in a future blog post, but the short answer is that computer science and programming are set to become more — not less — important in the age of AI. This is particularly important if we want to tackle the lack of diversity in the tech sector and ensure that young people from all backgrounds have the opportunity to shape the AI-enabled future they will be living in.

Close up of two young people working at a computer.

The simple truth is that there is no artificial intelligence without computer science. The rapid advances in AI are likely to increase the range of problems that can be solved by technology, creating demand for more complex software, which in turn will create demand for more programmers with increasingly sophisticated and complex skills. 

That’s why we’ve set ourselves the ambition that we will inspire 10 million more young people to learn how to get creative with technology over the next 10 years through Code Club. 

Curriculum reform 

But we also need to think about what needs to change in the curriculum to ensure that schools are equipping young people with the skills and knowledge they need to thrive in an AI-powered world. 

That will mean changes to the computer science curriculum, providing different pathways that reflect young people’s interests and passions, but ensuring that every child leaves school with a qualification in computer science or applied digital skills. 

It’s not just computer science courses. We need to modernise mathematics and figure out what a data science curriculum looks like (and where it fits). We also need to recognise that AI skills are just as relevant to biology, geography, and languages as they are to computer science. 

A teacher assisting a young person with a coding project.

To be clear, I am not talking about how AI technologies will save teachers time, transform assessments, or be used by students to write essays. I am talking about the fundamentals of the subjects themselves and how AI technologies are revolutionising the sciences and humanities in practice in the real world. 

These are all areas where the Raspberry Pi Foundation is engaged in original research and experimentation. Stay tuned. 

Supporting teachers

All of this needs to be underpinned by a commitment to supporting teachers, including through funding and time to engage in meaningful professional development. This is probably the biggest challenge for policy makers at a time when budgets are under so much pressure. 

For any nation to plausibly claim that it has an Action Plan to be an AI superpower, it needs to recognise the importance of making the long-term investment in supporting our teachers to develop the skills and confidence to teach students about AI and the role that it will play in their lives. 

I’d love to hear what you think and if you want to get involved, please get in touch.

The post The need to invest in AI skills in schools appeared first on Raspberry Pi Foundation.


Building Restful APIs with Deno and Oak


With a simple book API, learn how to define middleware in Oak, handle request validation, create route handlers and perform basic DB operations with DenoKV.

This article will guide you through creating a REST API using Deno, the Oak framework and a DenoKV database. We will build a simple book API that shows the different ways to define middleware in Oak, handle request validation, create route handlers and perform basic database operations with DenoKV.

What Is Deno?

Deno fixes many problems developers face with Node. Its straightforward approach helps developers write more secure, efficient, modern JavaScript code. One of its major selling points is security. By default, Deno does not allow access to the file system, network or environment variables unless explicitly allowed by the developer.

Deno also gives developers native TypeScript support without needing additional configuration.

What Is Oak?

Oak is a middleware framework for Deno that helps developers build web apps and APIs. It provides tools for handling HTTP requests, managing routing and integrating middleware, similar to Express in Node.js. It comes with TypeScript right out of the box and benefits from Deno’s security and modern runtime environment.

What Is DenoKV?

DenoKV is a key-value database that manages structured data for Deno.
Each piece of data or “value” has a unique identifier or “key,” making it easy to fetch data by referencing its key. This approach allows developers to manage data without setting up a separate database server.
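As a rough mental model (a simplified in-memory sketch, not DenoKV's actual implementation), a key-value store maps hierarchical keys to values and can look entries up individually or by key prefix:

```typescript
// Simplified in-memory sketch of a key-value store with
// hierarchical keys, loosely modeled on the DenoKV API shape.
type Key = string[];

class MiniKV {
  private store = new Map<string, unknown>();

  // Join key parts with a separator unlikely to appear in keys.
  private encode(key: Key): string {
    return key.join("\u0000");
  }

  set(key: Key, value: unknown): void {
    this.store.set(this.encode(key), value);
  }

  get(key: Key): unknown | null {
    return this.store.get(this.encode(key)) ?? null;
  }

  // Return all values whose key starts with the given prefix.
  list(prefix: Key): unknown[] {
    const p = this.encode(prefix) + "\u0000";
    const out: unknown[] = [];
    for (const [k, v] of this.store) {
      if (k.startsWith(p)) out.push(v);
    }
    return out;
  }
}

const kv = new MiniKV();
kv.set(["books", "1"], { title: "Dune" });
kv.set(["books", "2"], { title: "Neuromancer" });
console.log(kv.list(["books"]).length); // 2
```

The real DenoKV persists data durably and supports richer key types, but the get/set/list-by-prefix shape above is the core of how we'll use it in this article.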

Project Setup

Run the following command to install Deno on macOS using Shell:

curl -fsSL https://deno.land/install.sh | sh

For Windows using PowerShell:

irm https://deno.land/install.ps1 | iex

For Linux using Shell:

curl -fsSL https://deno.land/install.sh | sh

To test your installation, run the following command:

 deno --version

Create New Project

Next, we need to create a new project. Run the command deno init deno-oak-demo to create a new project called deno-oak-demo, then cd into it.

Initializing a new Deno project using bash

Next, we need to create three new files in the deno-oak-demo directory called book.routes.ts, book.types.ts and validation.ts.

Your deno-oak-demo directory should now look like this.

Project directory

Install Deno VS Code Extension

We need to install Deno’s official VS Code extension. This extension adds support for Deno, such as offering import suggestions and automatically caching remote modules.

Deno’s official VS Code extension

Install Oak

We’ll use JSR to install Oak. JSR, the JavaScript Registry, is a package registry designed by the creators of Deno. It allows developers to publish their TypeScript code directly without the need to transpile it first. Its key advantage is that it is TypeScript-first and supports ES Modules only.

We’ll use the command deno add jsr:@oak/oak to install the Oak framework as a dependency. If this is successful, a deno.lock file will be created.

The deno.lock file helps prevent unexpected changes in dependencies during the life of your application by locking specific dependency versions.

Your deno.json file should now look like this.

Image showing deno.json file

When we install a package in Deno, it is added to the deno.json file as an import. If we want to import this package into a file, we can use the alias defined in the deno.json or directly reference the package’s URL.

Registering Middleware in Oak

Middleware functions are processing layers that handle requests and responses in our application.

An Oak application is a chain of middleware functions, such as route handlers, validation functions, custom error handlers and loggers. We can register a middleware on the application as a whole, on a group of routes (a router), or on a specific route.

This is how to register a middleware on the Application as a whole:

import { Application, Context, Next } from "@oak/oak";
const app = new Application();
const firstMiddleware = async (context: Context, next: Next) => {
  console.log("Running first Middleware");
  await next();
};
app.use(firstMiddleware);
app.use((context) => {
  console.log("Running second Middleware");
  context.response.body = "Hello world!";
});

await app.listen({ port: 3000 });

In the example above, we first define a middleware function and then register it to the application as a whole using the app.use method.

Notice that the middleware function has two parameters, context and next. context is an object used to access the request data and response methods, while next is a function used to call the next middleware in the chain.

In the example above, the first middleware will be called, thereby logging “Running first Middleware” to the console. This will be followed by the second middleware, which logs “Running second Middleware” to the console and returns “Hello world!” in the response body.

It’s important to note that the order in which middleware is registered is important.

Take a look at the code snippet below.

import { Application, Context, Next } from "@oak/oak";
const app = new Application();
app.use((context) => {
  console.log("Running second Middleware");
  context.response.body = "Hello world!";
});
const firstMiddleware = async (context: Context, next: Next) => {
  console.log("Running first Middleware");
  await next();
};
app.use(firstMiddleware);

await app.listen({ port: 3000 });

In this example, only "Running second Middleware" will be logged to the console, and "Hello world!" will be sent as the response body. This is because the first middleware registered in the chain (the anonymous one) sets the response body but never calls next(), so firstMiddleware is never reached.
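The dispatch behavior above can be sketched without Oak at all. Here is a minimal simulation (the runner and names are illustrative, not Oak's actual API, and Oak's real middleware is async while this sketch is synchronous for brevity) showing that a middleware which never calls next() stops the chain:

```typescript
// Minimal middleware runner: each middleware receives a shared
// context and a next() function that invokes the rest of the chain.
type Ctx = { log: string[]; body?: string };
type Middleware = (ctx: Ctx, next: () => void) => void;

function run(middlewares: Middleware[], ctx: Ctx): void {
  function dispatch(n: number): void {
    const mw = middlewares[n];
    if (mw) mw(ctx, () => dispatch(n + 1)); // next() runs the rest of the chain
  }
  dispatch(0);
}

const ctx: Ctx = { log: [] };
run(
  [
    (c, next) => {
      c.log.push("first");
      next(); // omit this call and the second middleware never runs
    },
    (c) => {
      c.log.push("second");
      c.body = "Hello world!";
    },
  ],
  ctx,
);
// ctx.log is now ["first", "second"] and ctx.body is "Hello world!"
```

Swap the order of the two middleware functions and only the one that sets the body runs, which mirrors the Oak behavior described above.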

This is how to register a middleware on a router:

import { Application, Context, Next, Router } from "@oak/oak";
const app = new Application();

const router = new Router();
router.prefix("/greeting");

const firstMiddleware = async (context: Context, next: Next) => {
  console.log("Running first Middleware");
  await next();
};
router.use(firstMiddleware);
router.get("/one", (context) => {
  context.response.body = "Hello, World!";
});
router.get("/two", (context) => {
  context.response.body = "What's Up, World!";
});

app.use(router.routes());
app.use(router.allowedMethods());

await app.listen({ port: 3000 });

In the above example, we define a new router, set the prefix for all its routes, register a middleware function and create two specific route handlers.

Since our middleware function is registered above the two route handlers, if either “/greeting/one” or “/greeting/two” is hit, our middleware function will run before the greeting is sent as the response body.

This is how to register a middleware on a specific route:

import { Application, Context, Next, Router } from "@oak/oak";
const app = new Application();

const router = new Router();
router.prefix("/greeting");

const firstMiddleware = async (context: Context, next: Next) => {
  console.log("Running first Middleware");
  await next();
};
router.get("/one", firstMiddleware, (context) => {
  context.response.body = "Hello, World!";
});
router.get("/two", (context) => {
  context.response.body = "What's Up, World!";
});

app.use(router.routes());
app.use(router.allowedMethods());

await app.listen({ port: 3000 });

In the above example, our middleware function will only run when “/greeting/one” is hit.

Request Validation

Now that we know how middleware works in Oak, let’s build our REST API. First, we’ll create a middleware function to handle request validation. We’ll use this function later when adding route handlers to create and update books.

Deno supports importing npm packages using the npm: specifier. We’ll import a package called Joi from NPM.

Add the following code to your validation.ts file:

import { Context, Next } from "@oak/oak";
import Joi, { ObjectSchema } from "npm:joi";

export const createBookSchema = Joi.object({
  title: Joi.string().required(),
  author: Joi.string().required(),
  description: Joi.string().required(),
});

export const updateBookSchema = Joi.object({
  title: Joi.string().optional(),
  author: Joi.string().optional(),
  description: Joi.string().optional(),
}).or("title", "author", "description");

export const validate =
  (schema: ObjectSchema) => async (context: Context, next: Next) => {
    const body = await context.request.body.json();
    const { error } = schema.validate(body, { abortEarly: false });

    if (error) {
      context.response.status = 400;
      context.response.body = {
        errors: error.details.map((d) => d.message),
      };
    } else {
      await next();
    }
  };

Defining Types

Next, in the book.types.ts file, let’s define a Book type with an id, title, author and description.

export interface Book {
  id: string;
  title: string;
  author: string;
  description: string;
}

Configure Book Router

Next, let’s import the Oak Router, Book interface, createBookSchema and updateBookSchema into the book.routes.ts file:

import { Router } from "@oak/oak/router";
import type { Book } from "./book.types.ts";
import { createBookSchema, updateBookSchema, validate } from "./validation.ts";

Next, initialize the DenoKV database, create a bookRouter and set its prefix to “/books”:

const kv = await Deno.openKv();
const bookRouter = new Router();
bookRouter.prefix("/books");

Next, create a helper function to get a book by its ID:

async function getBookById(id: string) {
  const entry = await kv.get(["books", id]);
  return entry.value as Book | null;
}

In the snippet above, kv.get takes an array with two strings: one represents the namespace “books” and the other represents the key for the specific book to be retrieved.

Next, let’s define the route handler to get a book by ID:

bookRouter.get("/:id", async (context) => {
  try {
    const id = context.params.id;
    const book = await getBookById(id);

    if (book) {
      context.response.body = book;
    } else {
      context.response.status = 404;
      context.response.body = { message: "Book not found" };
    }
  } catch (error) {
    console.log(error);
    context.response.status = 500;
    context.response.body = { message: "Failed to retrieve book" };
  }
});

Next, let’s add a route handler to get all books:

bookRouter.get("/", async (context) => {
  try {
    const entries = kv.list({ prefix: ["books"] });
    const books: Book[] = [];

    for await (const entry of entries) {
      books.push(entry.value as Book);
    }

    context.response.body = books;
  } catch (error) {
    console.log(error);
    context.response.status = 500;
    context.response.body = { message: "Failed to fetch books" };
  }
});

In the snippet above, kv.list retrieves all key-value pairs that share a common prefix (namespace).

Next, add the route handler to create a new book:

bookRouter.post("/", validate(createBookSchema), async (context) => {
  try {
    const body = await context.request.body.json();

    const uuid = crypto.randomUUID();
    const newBook: Book = { id: uuid, ...body };

    await kv.set(["books", uuid], newBook);

    context.response.status = 201;
    context.response.body = { message: "Book added", book: newBook };
  } catch (error) {
    console.log(error);
    context.response.status = 500;
    context.response.body = { message: "Failed to add book" };
  }
});

In the snippet above, kv.set can be used to save a new key-value pair in the database. In this case, it takes two parameters: an array with two strings (the namespace “books” and the key uuid) and the value to be saved (newBook).

Next, let’s add the route handler to update a book by ID:

bookRouter.patch("/:id", validate(updateBookSchema), async (context) => {
  try {
    const id = context.params.id;
    const existingBook = await getBookById(id);

    if (!existingBook) {
      context.response.status = 404;
      context.response.body = { message: "Book not found" };
      return;
    }

    const body = await context.request.body.json();

    const updatedBook = { ...existingBook, ...body };

    await kv.set(["books", id], updatedBook);
    context.response.status = 200;
    context.response.body = { message: "Book updated", book: updatedBook };
  } catch (error) {
    console.log(error);
    context.response.status = 500;
    context.response.body = { message: "Failed to update book" };
  }
});

In the snippet above, kv.set can also be used to update the value of a key-value pair. In this case, it takes two arguments:

  1. An array with two strings: the namespace “books” and the key, which is the ID of the book being updated.
  2. The new value to be updated (updatedBook).

Finally, let’s add the route handler to delete a book by ID and export bookRouter:

bookRouter.delete("/:id", async (context) => {
  try {
    const id = context.params.id;
    const book = await getBookById(id);

    if (!book) {
      context.response.status = 404;
      context.response.body = { message: "Book not found" };
      return;
    }

    await kv.delete(["books", id]);
    context.response.status = 200;
    context.response.body = { message: "Book deleted", book };
  } catch (error) {
    console.log(error);
    context.response.status = 500;
    context.response.body = { message: "Failed to delete book" };
  }
});

export { bookRouter };

In the snippet above, kv.delete is used to delete a given key-value pair.

Initialize Oak Application

Replace the code in your main.ts file with the following:

import { Application } from "@oak/oak/application";
import { bookRouter } from "./book.routes.ts";

const app = new Application();

app.use(bookRouter.routes());
app.use(bookRouter.allowedMethods());

await app.listen({ port: 3000 });

Finally, we need to make a small change to our deno.json file to run our app. Add the --unstable-kv and --allow-net flags to the dev task.

Aside from the exact versions of Oak and the assert library, your deno.json should now look like this.

{
  "tasks": {
    "dev": "deno run --watch --unstable-kv --allow-net main.ts"
  },
  "imports": {
    "@oak/oak": "jsr:@oak/oak@^17.1.3",
    "@std/assert": "jsr:@std/assert@1"
  }
}

We added the --unstable-kv flag because Deno KV is still an unstable API, and the --allow-net flag to grant main.ts network access.

With this in place, we can start our app by running the command deno task dev.

Conclusion

In this guide, we built a simple book API that shows how to define middleware in Oak, handle request validation, create route handlers and perform basic database operations with Deno KV.

Read the whole story
alvinashcraft
7 hours ago
reply
West Grove, PA
Share this story
Delete

What is SQL Database in Microsoft Fabric?


Problem We are currently evaluating Microsoft Fabric to see if it fits our needs. It seems the warehouse might be a bit overkill as it’s more geared towards…

The post What is SQL Database in Microsoft Fabric? appeared first on MSSQLTips.com.


Mark Zuckerberg Turns His Back on the Media

The Meta CEO is abandoning his commitment to the truth in favor of a Trump-style playbook.

AI Agents Are About To Blow Up the Business Process Layer


When people think of generative AI, many envision it working as part of a “system of engagement” — a customer service agent, a supply chain management tool or a way to intelligently interact with and search an organization’s PDFs and other proprietary data.

That’s an accurate view: For the next year or two, applications that intelligently create content by leveraging large language models (LLMs) will remain a primary AI focus for enterprises.

But consider this: Most of the code written at enterprises sits within business processes — systems like inventory planning, which sit between the engagement layer and the more rigid systems of record (an organization’s data, etc.). How will GenAI benefit that layer? How can an organization improve its business processes with this pervasive technology?

A classic enterprise architecture.

Agentic AI is the answer. While AI agents are built to automate specific, often repetitive tasks (like updating your calendar), they generally require human input. Agentic AI is all about autonomy (think self-driving cars): it employs a system of agents that constantly adapt to dynamic environments and independently create, execute and optimize results.

When agentic AI is applied to business process workflows, it can replace fragile, static business processes with dynamic, context-aware automation systems.

Let’s take a look at why integrating AI agents into enterprise architectures marks a transformative leap in the way organizations approach automation and business processes, and what kind of platform is required to support these systems of automation.

What Agents Are Doing Now

When you provide an agent with context, the agent feeds that context to an LLM and asks it to respond. AI agents can also use external capabilities to complete tasks on behalf of users. Guided by instructions and information derived from context, these agents can perform several key functions:

  • Tool use: The agent uses external functions, APIs or tools to extend its capabilities and perform specific tasks. This can include calling predefined functions or interfacing with external services (like making web requests using cURL or accessing RESTful APIs) to obtain context or execute actions beyond its inherent functionalities.
  • Decision-making: The agent evaluates available information and selects the most appropriate action to achieve its goals. This involves analyzing context, weighing possible outcomes and choosing a course of action that aligns with the desired objectives.
  • Planning: The agent formulates a sequence of actions or strategies to achieve a specific goal.
  • Reasoning: The agent analyzes available context, draws conclusions, predicts outcomes of actions and makes informed decisions about the optimal steps to take to reach the desired outcome.

These latter kinds of functions — decision-making, planning and reasoning — often involve multiple agents working together toward a goal. The agents could seek to refine generated code for correctness, debate whether an agentic decision is biased or plan the use of other agentic capabilities to complete a task.
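The tool-use loop described above can be sketched in a heavily simplified form. In the snippet below, every name (callModel, fetchPrice, runAgent) is a hypothetical stand-in: callModel is a hard-coded stub that a real agent would replace with an LLM API call, and fetchPrice stands in for an external tool such as a REST request.

```typescript
// The model either asks for a tool or produces a final answer.
type ToolCall = { tool: string; args: string } | { answer: string };

const tools: Record<string, (args: string) => string> = {
  // A fake price-lookup tool standing in for a web request or API call.
  fetchPrice: (symbol) => (symbol === "ACME" ? "42.00" : "unknown"),
};

function callModel(context: string): ToolCall {
  // Stub: request the tool once, then answer using its result.
  if (!context.includes("price:")) return { tool: "fetchPrice", args: "ACME" };
  return { answer: `ACME trades at ${context.split("price:")[1]}` };
}

function runAgent(goal: string): string {
  let context = goal;
  for (let step = 0; step < 5; step++) {
    const action = callModel(context);
    if ("answer" in action) return action.answer;
    // Tool use: extend the context with the tool's result and loop again.
    context += ` price:${tools[action.tool](action.args)}`;
  }
  return "gave up";
}

console.log(runAgent("What is the ACME stock price?"));
```

The loop structure — call the model, execute any requested tool, feed the result back as new context — is the core pattern real agent frameworks implement, just with an LLM where the stub sits.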

Orchestrating an Agentic Network

Models that power networks of agents are essentially stateless functions that take context as an input and output a response, so some kind of framework is necessary to orchestrate them. Part of that orchestration could be simple refinements (for example, having the model request more information). This might sound analogous to retrieval-augmented generation (RAG) — and it should, because RAG is essentially a simplified form of agent architecture: It provides the model with a single tool that accesses additional information, often from a vector database.

But multiagentic model frameworks take this further: They broker requests for additional information or provide a response that’s designed to be fed to another agent for refinements.

Frameworks enable agents to work in concert.

For example, one agent could write some Python while another then reviews it. Or an agent could express a goal or idea, and then a second agent’s job could be to break that up into a set of tasks, or review the idea to find problems that the first agent can review and then refine the idea. Your results start to get better and better.
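That writer/reviewer pattern can be sketched with two stubbed agents. The names (writerAgent, reviewerAgent, refine) and the canned critiques below are hypothetical stand-ins; in practice each agent would wrap its own LLM call.

```typescript
// Writer: produces a draft, folding in any accumulated feedback.
function writerAgent(task: string, feedback: string[]): string {
  return `${task}${feedback.map((f) => ` [addressed: ${f}]`).join("")}`;
}

// Reviewer: returns a critique, or null when the draft is acceptable.
function reviewerAgent(draft: string): string | null {
  if (!draft.includes("error handling")) return "add error handling";
  if (!draft.includes("tests")) return "add tests";
  return null;
}

// Orchestration: one agent's output becomes the other's input until
// the reviewer approves or we hit the round limit.
function refine(task: string, maxRounds = 5): string {
  const feedback: string[] = [];
  for (let round = 0; round < maxRounds; round++) {
    const draft = writerAgent(task, feedback);
    const critique = reviewerAgent(draft);
    if (critique === null) return draft; // reviewer approved
    feedback.push(critique);
  }
  return writerAgent(task, feedback);
}

console.log(refine("write a CSV parser"));
```

Each round, the critique feeds back into the next draft, which is exactly the "better and better" refinement loop described above.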

OK, But How Do I Build Agentic AI?

In the near future, a lot of software engineers will become agentic process authors. They’ll build these processes by mixing and matching components — models, user input, goals — and critical business services.

An example of one of these components is the stock in an inventory management system. What if you connected that system to an agent that could help optimize inventory levels during the holiday season? With the help of another agent that has done some historical inventory level analysis, you could ensure that there is just enough inventory to meet seasonal demand yet leave little inventory after the holiday rush. This might disappoint fervent after-Christmas sale shoppers, but it would also help prevent retailers from selling their wares at a loss.

But how will developers build these systems?

Agentic processes can, of course, be expressed with code, but it also helps to visualize them as “agentic flows” — one agent’s output becomes the input of another agent, and so on. Tools available now are already providing a lot of value in this effort to simplify building agentic systems. One such solution is Langflow, a visual, low-code builder for creating agentic AI applications and complex AI workflows by dragging and dropping different components, without the need for much coding.

Agentic “flows” help to automate business processes.

Langflow enables developers to define anything as a tool, including components like a prompt, data source, model, APIs, tools or any other agents. We’ve recently seen significant demand for building “flows” with agents, as developers are creating a lot of applications that include several multiagent capabilities. Agents are the most popular type of component that developers are inserting into flows with Langflow.

Wrapping Up: From Copilot to Pilot

Agentic workflows bring together enterprise data, AI and APIs, forming the systems of automation that empower domain experts to scale their abilities and make enterprises work better through AI. Integrating AI agents into enterprise architectures marks a big leap in how organizations approach automation and business processes. These agents, empowered by LLMs and agentic frameworks, transcend traditional boundaries by seamlessly operating across processes, workflows and code.

How AI can transform enterprise architecture.

Adopting agentic workflows promises to enhance efficiency, scalability and responsiveness across business operations. They’ll manage entire workflows, handle complex tasks with greater adaptability and improve customer experiences by providing more personalized and timely interactions. As automation becomes embedded in enterprise systems, AI copilots will graduate to pilots, and organizations that employ agentic AI will be better positioned to innovate, compete and deliver value.

For more details on agentic AI, read the free whitepaper, “Systems of Automation: The Future of Enterprise Architecture Is Agentic.” And check out this page to learn more about Langflow. 

The post AI Agents Are About To Blow Up the Business Process Layer appeared first on The New Stack.
