The Consumer Financial Protection Bureau (CFPB) is sending out mass layoff notices that appear to be in defiance of a court order blocking further layoffs following DOGE-induced cuts.
“I regret to inform you that you are affected by a reduction in force (RIF) action,” says a notice reviewed by The Verge that was sent by CFPB Acting Director Russell Vought to an agency employee. “This RIF action is necessary to restructure the Bureau’s operation to better reflect the agency’s priorities and mission.” Access to CFPB systems will be cut off after Friday, and employees will be placed on administrative leave until their official end date, the notice says.
Fox Business reports, citing an unnamed source, that around 1,500 workers across the agency’s core functions will receive RIF notices. On Thursday night, CFPB Chief Legal Officer Mark Paoletta sent a notice of the agency’s supervision and enforcement priorities that said the CFPB would “shift resources away from enforcement and supervision that can be done by the States” and rescinded previous enforcement and supervision priority documents, The Wall Street Journal reported.
In March, a federal judge ordered the Trump administration not to “terminate any CFPB employee, except for cause related to the individual employee’s performance or conduct; and defendants shall not issue any notice of reduction-in-force to any CFPB employee.” An appeals court order this month partially stayed that portion of the injunction, but only to the extent it would keep the CFPB from issuing a RIF that the agency determined “after a particularized assessment, to be unnecessary to the performance of defendants’ statutory duties.”
The union that brought the original complaint to stop the agency from being gutted filed a motion late Thursday asking the court to require the government to explain how the mass terminations don’t violate its preliminary injunction. “The plaintiffs have been told that entire offices, including statutorily mandated ones, have or soon will be either eliminated or reduced to a single person,” the filing says. “It is unfathomable that cutting the Bureau’s staff by 90 percent in just 24 hours, with no notice to people to prepare for that elimination, would not ‘interfere with the performance’ of its statutory duties, to say nothing of the implausibility of the defendants having made a ‘particularized assessment’ of each employee’s role in the three-and-a-half business days since the court of appeals imposed that requirement.”
Sen. Elizabeth Warren (D-MA), the top Democrat on the Senate Banking Committee who helped establish the agency, said the dismantling of the CFPB is “yet another assault on consumers and our democracy by this lawless Administration, and we will fight back with everything we’ve got.”
Updated March 17th: Added filing from CFPB worker union and statement from Sen. Elizabeth Warren.
This was originally published on our developer newsletter, GitHub Insider, which offers tips and tricks for devs at every level. If you’re not subscribed, go do that now—you won’t regret it (we promise).
If you’ve ever wondered which AI model is the best fit for your GitHub Copilot project, you’re not alone. Since each model has its own strengths, picking the right one can feel somewhat mysterious.
With models that prioritize speed, depth, or a balance of both, it helps to know what each one brings to the table. Let’s break it down together. 👇
Your mileage may vary, and it’s always worth trying things yourself rather than taking someone else’s word for it, but this is how these models were designed to be used. All that being said…
Let’s talk models.
Fast, efficient, and cost-effective, o4-mini and o3-mini are ideal for simple coding questions and quick iterations. If you’re looking for a no-frills model, use these (there’s a quick example of the kind of task I mean right after this list).
✅ Use them for:
- Simple, self-contained coding questions
- Quick iterations where a fast, low-cost answer matters more than depth
👀 You may prefer another model: If your task spans multiple files or calls for deep reasoning, a higher‑capacity model such as GPT‑4.5 or o3 can keep more context in mind. Looking for extra expressive flair? Try GPT‑4o.
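To make that concrete, here’s the scale of task I’d hand to one of these lightweight models: a tiny, self-contained change where you mostly want a fast answer. The snippet is an invented example, not anything Copilot-specific.

```python
# The kind of quick, self-contained prompt a lightweight model handles well:
# "Add type hints and a docstring to this helper, and handle the empty case."

def average(values):
    return sum(values) / len(values)

# A typical quick-iteration result:
def average_v2(values: list[float]) -> float:
    """Return the arithmetic mean of values, or 0.0 for an empty list."""
    return sum(values) / len(values) if values else 0.0
```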
Need solid performance but watching your costs? Claude 3.5 Sonnet is like a dependable sidekick. It’s great for everyday coding tasks without burning through your monthly usage.
✅ Use it for:
- Everyday coding tasks
- Getting solid results without burning through your monthly usage
👀 You may prefer another model: For elaborate multi‑step reasoning or big‑picture planning, consider stepping up to Claude 3.7 Sonnet or GPT‑4.5.
These are your go-to models for general tasks. Need fast responses? Check. Want to work with text *and* images? Double check. GPT-4o and GPT-4.1 are like the Swiss Army knives of AI models: flexible, dependable, and cost-efficient.
✅ Use them for:
- General-purpose coding tasks
- Fast responses when you need them
- Working with text *and* images
👀 You may prefer another model: Complex architectural reasoning or multi‑step debugging may land more naturally with GPT‑4.5 or Claude 3.7 Sonnet.
This one’s the power tool for large, complex projects. From multi-file refactoring to feature development across front end and back end, Claude 3.7 Sonnet shines when context and depth matter most.
✅ Use it for:
- Multi-file refactoring across a large codebase
- Feature development that spans front end and back end
- Work where context and depth matter most
👀 You may prefer another model: For quick iterations or straightforward tasks, Claude 3.5 Sonnet or GPT‑4o may deliver results with less overhead.
Gemini 2.5 Pro is the powerhouse for advanced reasoning and coding. It’s built for complex tasks (think: deep debugging, algorithm design, and even scientific research). With its long-context capabilities, it can handle extensive datasets or documents with ease.
✅ Use it for:
- Deep debugging and algorithm design
- Research-heavy or scientific work
- Long-context tasks over extensive datasets or documents
👀 You may prefer another model: For cost-sensitive tasks, o4-mini or Gemini 2.0 Flash are more budget-friendly options.
Got a tricky problem? Whether you’re debugging multi-step issues or crafting full-on systems architectures, GPT-4.5 thrives on nuance and complexity.
✅ Use it for:
- Debugging tricky, multi-step issues
- Crafting full-on systems architectures
- Problems where nuance and complexity matter
👀 You may prefer another model: When you just need a quick iteration on something small—or you’re watching tokens—GPT‑4o can finish faster and cheaper.
These models are perfect for tasks that need precision and logic. Whether you’re optimizing performance-critical code or refactoring a messy codebase, o3 and o1 excel at breaking problems down step by step (there’s a small before-and-after example of that kind of optimization right after this list).
✅ Use them for:
- Optimizing performance-critical code
- Refactoring messy codebases
- Problems that benefit from step-by-step reasoning
👀 You may prefer another model: During early prototyping or lightweight tasks, a nimble model such as o4‑mini or GPT‑4o may feel snappier.
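For a feel of what “precision and logic” buys you, here’s a toy before-and-after of the kind of performance-critical tweak you might talk one of these reasoning models through. Both functions are invented for illustration.

```python
# A toy "optimize performance-critical code" task: the win comes from
# reasoning about data structures, not clever syntax.

def common_items_slow(a: list[int], b: list[int]) -> list[int]:
    """O(n*m): rescans list b for every element of a."""
    return [x for x in a if x in b]

def common_items_fast(a: list[int], b: list[int]) -> list[int]:
    """O(n+m): a set makes each membership check constant time."""
    b_set = set(b)
    return [x for x in a if x in b_set]
```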
Got visual inputs like UI mockups or diagrams? Gemini 2.0 Flash lets you bring images into the mix, making it a great choice for front-end prototyping or layout debugging.
✅ Use it for:
- Visual inputs like UI mockups and diagrams
- Front-end prototyping
- Layout debugging
👀 You may prefer another model: If the job demands step‑by‑step algorithmic reasoning, GPT‑4.5 or Claude 3.7 Sonnet will keep more moving parts in scope.
Here’s the rule of thumb: match the model to the task. Practice really does make perfect, and as you work with different models, you’ll get a feel for which one fits which job. The more I’ve personally used certain models, the more I’ve learned, “oh, I should switch for this particular task,” and “this one will get me there.”
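If it helps to see the whole cheat sheet in one place, here’s a minimal sketch of that rule of thumb as a plain lookup table. The task labels and the `pick_model` helper are made up for illustration (in Copilot Chat you’d normally just switch models from the model picker), but the groupings follow the sections above.

```python
# A minimal sketch of "match the model to the task," summarizing the rough
# groupings in this post. Task labels and the helper are illustrative only;
# this is not a Copilot API.

TASK_TO_MODEL = {
    "quick-iteration":      "o4-mini",            # fast, low-cost answers
    "everyday-coding":      "Claude 3.5 Sonnet",  # dependable daily driver
    "general-purpose":      "GPT-4o",             # flexible, text + images
    "multi-file-refactor":  "Claude 3.7 Sonnet",  # depth and large context
    "deep-debugging":       "Gemini 2.5 Pro",     # long-context reasoning
    "systems-architecture": "GPT-4.5",            # nuanced, multi-step problems
    "perf-optimization":    "o3",                 # step-by-step logical rigor
    "visual-prototyping":   "Gemini 2.0 Flash",   # image inputs for UI work
}

def pick_model(task: str) -> str:
    """Return a reasonable default model for a given kind of task."""
    return TASK_TO_MODEL.get(task, "GPT-4o")  # general-purpose fallback

print(pick_model("multi-file-refactor"))  # -> Claude 3.7 Sonnet
```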
And because I enjoy staying employed, I would love to cheekily mention that you can (and should!) use these models with…
Good luck, go forth, and happy coding!