I'm sure you've seen AI apps where you sit tight for 30 seconds and wonder whether things are stuck. Not a great experience, right? You deserve better, so how do we fix it?
Type your prompt> Tell me a joke
.
.
.
.
.
.
.
.
Why could I never find the atoms? Because they split.
This series is about the Copilot SDK and how you can leverage your existing GitHub Copilot license to integrate AI into your apps.
With streaming, the response arrives in chunks: pieces that you can show as soon as they arrive. How do we do that, though, and how can the GitHub Copilot SDK help out?
Well, there are two things you need to do:
1. Set streaming to True when you call create_session.
2. Listen for ASSISTANT_MESSAGE_DELTA events and print out each chunk as it arrives.
# 1. Enable streaming
session = await client.create_session({
    "model": "gpt-4.1",
    "on_permission_request": PermissionHandler.approve_all,
    "streaming": True,
})

print("Starting streamed response:")

# Listen for response chunks
def handle_event(event):
    if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
        # 2. Chunk arrived, print it
        sys.stdout.write(event.data.delta_content)
        sys.stdout.flush()
    if event.type == SessionEventType.SESSION_IDLE:
        print()  # New line when done
Here's what the full application looks like:
import asyncio
import sys

from copilot import CopilotClient, PermissionHandler
from copilot.generated.session_events import SessionEventType


async def main():
    client = CopilotClient()
    await client.start()

    # 1. Enable streaming
    session = await client.create_session({
        "model": "gpt-4.1",
        "on_permission_request": PermissionHandler.approve_all,
        "streaming": True,
    })

    print("Starting streamed response:")

    # Listen for response chunks
    def handle_event(event):
        if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
            # 2. Chunk arrived, print it
            sys.stdout.write(event.data.delta_content)
            sys.stdout.flush()
        if event.type == SessionEventType.SESSION_IDLE:
            print()  # New line when done

    session.on(handle_event)

    print("Sending prompt...")
    await session.send_and_wait({"prompt": "Tell me a short joke"})

    await client.stop()


if __name__ == "__main__":
    asyncio.run(main())
That's it, folks. Now go out and build better experiences for your users.
Electron 41 has been released! It includes upgrades to Chromium 146.0.7680.65, V8 14.6, and Node v24.14.0.
The Electron team is excited to announce the release of Electron 41! You can install it with npm via npm install electron@latest or download it from our releases website. Continue reading for details about this release.
If you have any feedback, please share it with us on Bluesky or Mastodon, or join our community Discord! Bugs and feature requests can be reported in Electron's issue tracker.
After publishing the initial 41.0.0 package, we addressed some high-priority bugs in follow-up patch releases. We recommend installing 41.0.2 when upgrading to Electron 41.
As of Electron 41, macOS Electron apps can now embed a digest of their ASAR Integrity information. This adds an additional layer of tamper detection for apps that use ASAR Integrity by validating the integrity information itself at app launch.
To enable the feature for your app, you can run the following command with @electron/asar v4.1.0 and above:
asar integrity-digest on /path/to/YourApp.app
You must re-sign your app afterwards. For more information, see the @electron/asar CLI documentation.
Support for this feature in Electron Forge is planned for the near future (electron/forge#4159).
On Wayland (Linux), frameless windows now have drop shadows and extended resize boundaries. To create fully frameless windows with no decorations, set hasShadow: false in the window constructor. #49885
Mitchell Cohen is writing a blog article about recent work to improve Electron's support of Wayland and client-side decorations on Linux. Watch this space!
The Electron team recently added MSIX auto-updater support according to RFC #21. Your update server can now serve both MSIX and Squirrel.Mac updates with essentially the same JSON response format. See the autoUpdater documentation for more information.
This was added in Electron 41 by #49586 and has also been backported to Electron 39.5.0 (#49585) and 40.2.0 (#49587).
Chromium 146.0.7680.65
Node v24.14.0
V8 14.6
Electron 41 upgrades Chromium from 144.0.7559.60 to 146.0.7680.65, Node.js from v24.11.1 to v24.14.0, and V8 from 14.4 to 14.6.
• Added a --disable-geolocation command-line flag for macOS apps to disable location services. #45934
• Added a disclaim option to the utilityProcess API to allow for TCC disclaiming on macOS. #49693 (Also in 39, 40)
• Added a reason property to the Notification 'closed' event on Windows to allow developers to know the reason the notification was dismissed. #50029 (Also in 40)
• Added a usePrinterDefaultPageSize option to webContents.print() to allow using the printer's default page size. #49812
• login event on webContents. #48512 (Also in 39, 40)
• --experimental-transform-types flag. #49882 (Also in 39, 40)
• long-animation-frame script attribution (via --enable-features=AlwaysLogLOAFURL). #49773 (Also in 39, 40)
• WebContents on navigation using webPreferences.focusOnNavigation. #49511 (Also in 40)
• WasmTrapHandlers fuse. #49839

Previously, PDF resources created a separate guest WebContents for rendering. Now, PDFs are rendered within the same WebContents instead. If you have code that detects PDF resources, use the frame tree instead of the WebContents.
Under the hood, Chromium enabled a feature that changes PDFs to use out-of-process iframes (OOPIFs) instead of the MimeHandlerViewGuest extension.
Cookie 'changed' event
We have updated the change cause reported in the cookie 'changed' event.
• When a new cookie is set, the change cause is inserted.
• When a cookie is deleted, the change cause is explicit.
• When the cookie being set is identical to an existing one (same name, domain, path, and value, with no actual changes), the change cause is inserted-no-change-overwrite.
• When the value of the cookie being set remains unchanged but some of its attributes are updated, such as the expiration attribute, the change cause is inserted-no-value-change-overwrite.
Electron 38.x.y has reached end-of-support as per the project's support policy. Developers and applications are encouraged to upgrade to a newer version of Electron. See https://releases.electronjs.org/schedule for the timeline of supported versions of Electron.
In the short term, you can expect the team to continue to focus on keeping up with the development of the major components that make up Electron, including Chromium, Node, and V8.
You can find Electron's public timeline here.
More information about future changes can be found on the Planned Breaking Changes page.
In this episode, we take a look at changes to GitHub Copilot for Students and JetBrains' new agentic IDE, Flow.
00:00 Intro
00:13 GitHub
01:14 JetBrains
-----
Links
GitHub
• Important Updates to GitHub Copilot for Students - https://github.com/orgs/community/discussions/189268
• Issue fields: Structured issue metadata is in public preview - https://github.blog/changelog/2026-03-12-issue-fields-structured-issue-metadata-is-in-public-preview/
JetBrains
• Sunsetting Code With Me - https://blog.jetbrains.com/platform/2026/03/sunsetting-code-with-me/
• Air Launches as Public Preview – A New Wave of Dev Tooling Built on 26 Years of Experience - https://blog.jetbrains.com/air/2026/03/air-launches-as-public-preview-a-new-wave-of-dev-tooling-built-on-26-years-of-experience/
-----
🐦X: https://x.com/theredcuber
🐙Github: https://github.com/noraa-junker
📃My website: https://noraajunker.ch
In this episode, Scott talks with Don Syme about the emerging world of agentic developer workflows and what it means when coding tools move from autocomplete helpers to collaborators. They explore how modern tools like GitHub Copilot and GitHub Agentic Workflows are evolving into systems that can plan, execute, and iterate on tasks across a codebase, and what that means for software design, type systems, and developer responsibility.
https://github.github.com/gh-aw/
There are many tutorials on using AI for coding, but far fewer on using AI for log analytics. We are living in exciting times as AI improves every day. If you spend most of your time troubleshooting business applications through log files, you might assume AI cannot help much yet. In most environments, scripts collect logs from production servers or client machines. On Windows, these event logs are usually the first place to look:
In addition, application-specific logs are usually captured as well. Depending on your setup, you may have one or more custom viewers to inspect them during production incidents. In cloud environments, vendor tooling (for example, Kusto in Azure) is excellent for analysis. Still, there are times when you need to export raw incident data and inspect it in detail with a text-based workflow. In on-premises container environments, this often means too much data and not enough capable log-viewing options.
When an error is hard to spot, you often end up with a folder full of logs in different formats. Then you have to sift through them manually to find patterns around the time of failure.
This process is tedious and time-consuming. It is also frustrating, because existing viewers often do not provide enough context to analyze the problem effectively.
One relatively new but powerful option is Copilot CLI, which lets you interact with Copilot directly from your terminal.
My current workflow is to copy all relevant data into one folder, start Copilot there, and give it full access to that folder:
C:\Issues\CrashAt16March_19_43_51>copilot

  GitHub Copilot v1.0.5
  Describe a task to get started.

  Tip: /usage  Display session usage metrics and statistics

  Copilot uses AI, so always check for mistakes.

C:\Issues\CrashAt16March_19_43_51    gpt-5.4 (medium) (1x)
❯ Type @ to mention files, # for issues/PRs, / for commands, or ? for shortcuts
  shift+tab switch mode
Then you can ask questions like:
find the root cause of the process crash on 16 March 2026 at 19:43:51; look at all log files from -5 minutes to +30 seconds around the crash time
Sometimes simple prompts produce excellent results. Other times, the AI lacks enough context about how services and components interact. In those cases, I switch to a data-pipeline approach.
You can ask Copilot to generate custom parsers for different log formats and merge the results into a common CSV file. Once you have that conversion tool, you can reuse it across future incidents.
You might ask questions such as:
To answer these questions for a specific incident, you need a custom viewer that can quickly correlate multiple log sources. In the past, building such a viewer or visualization could take weeks. With a data-pipeline approach, this becomes much easier: parse the different log formats, convert them into a common format such as CSV (Comma-Separated Values), and reuse that converter for future issues.
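As a sketch of that converter idea, suppose one source logs "2026-03-16 19:43:51 ERROR message" and another logs "[19:43:51.120] message" (both formats are made up for illustration). Two tiny parsers can normalize them into common rows and merge them into CSV:

```python
import csv
import io
import re

# Hypothetical log formats for illustration; replace with your real ones.
APP_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (\w+) (.*)$")
SVC_RE = re.compile(r"^\[(\d{2}:\d{2}:\d{2})\.\d+\] (.*)$")

def parse_app(line):
    """Parse 'YYYY-MM-DD HH:MM:SS LEVEL message' lines into a common row."""
    m = APP_RE.match(line)
    if not m:
        return None
    return {"time": m.group(1), "source": "app", "level": m.group(2), "message": m.group(3)}

def parse_svc(line, day="2026-03-16"):
    """Parse '[HH:MM:SS.mmm] message' lines; the date must come from context."""
    m = SVC_RE.match(line)
    if not m:
        return None
    return {"time": f"{day} {m.group(1)}", "source": "svc", "level": "", "message": m.group(2)}

def merge_to_csv(rows):
    """Sort normalized rows by time and render them as CSV text."""
    buf = io.StringIO()
    w = csv.DictWriter(buf, fieldnames=["time", "source", "level", "message"])
    w.writeheader()
    for row in sorted(rows, key=lambda r: r["time"]):
        w.writerow(row)
    return buf.getvalue()
```

Once every source flows through such a parser, a single time-sorted CSV gives you one place to look, and any viewer or spreadsheet can open it.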

If you stay on well-known implementation paths for your custom viewer, you can get excellent results and iteratively add analysis features through prompts. For example, combining JavaScript and C# with WebView2 gives you a strong architecture: C# handles large-file parsing, while the web UI provides effective visualizations.
It is now possible to continuously create and update the viewer for each specific issue.

This is a game changer for log analytics: log viewers that used to be immutable, and that lacked features, can now be changed on the fly for the specific issue at hand. Just put the source code of your current viewer into your problem folder, and adapt the viewer to whatever problem you are analyzing.
This is an ideal AI use case: small, self-contained tools that need frequent, lightweight tweaks. You can request timeline charts for specific messages and correlate them with other logs. Let AI accelerate the UI work while you focus on the deeper correlations it may miss due to limited context or system knowledge. Working with a viewer tailored to exactly what you need is more effective and more enjoyable. As a bonus, flexible visualizations help you learn far more about your data.
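As a sketch of the timeline idea (assuming log rows have already been normalized into dicts with hypothetical time and message fields), bucketing a specific message per minute yields exactly the counts behind such a timeline chart:

```python
from collections import Counter

def timeline(rows, needle):
    """Count rows whose message contains `needle`, bucketed per minute.

    Assumes each row has a "time" field formatted "YYYY-MM-DD HH:MM:SS"
    and a "message" field; slicing to 16 chars keeps "YYYY-MM-DD HH:MM".
    """
    buckets = Counter()
    for row in rows:
        if needle in row["message"]:
            buckets[row["time"][:16]] += 1
    return dict(sorted(buckets.items()))
```

The resulting minute-to-count mapping can be printed directly, or handed to whatever charting layer your custom viewer uses.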
If you build a custom viewer this way, share your experience in the comments—especially what worked well and where this approach helped most.