Dashboards, or Launchpads?

I have a personal vendetta against “dashboards.” Not because they’re not useful — I actually think they’re extremely useful — but rather because they’re generally built with the wrong user in mind, then used by a completely different user and for a different use case.

Let’s look at the origins of dashboards, how our usage has evolved, and most importantly, how to create single-purpose dashboards — what I call launchpads — that are built for their intended purpose.

Wallboards Aren’t for Debugging

Dashboards, as a lot of people think of them today, are more like “wallboards” — built as if they’re going to be put on a 75-inch wall-mounted TV in an office, thinking folks might spot an issue by looking at them. However, these dashboards end up being leveraged by engineers who use them as launchpads to investigate their systems.

The dashboards that are truly useful are curated around system problems, built to serve engineers on their machines. They’re defined by the teams who build and support their code to bring relevant information together so that, in the event of an incident, they can use that as the first — but crucially, not the only — place to look. They’re where on-call engineers go when alerts are fired.

Horses, Telemetry and Real-Time Decisions

The origin of the term “dashboard” (or simply “dash board”) is not modern; it’s actually really old. From what I can tell, the term originated from horse-drawn carriages where a wooden or leather board/apron was added in front of the driver to stop them being hit with debris from the road when the horses “dashed” (galloped faster), hence the name “dash board.” Over time, as we moved away from horse-drawn carriages into combustion engines, the panel in front of the driver became the dashboard.

The dashboard was then used to house readouts from the various instruments monitoring the vehicle’s vital information — for example, the fuel gauge, tire pressure or engine temperature. This information is important as the driver uses it to make real-time decisions. The term “dashboard” became the name for the place where we put our instrument panels.

This is something that translates well into the way we use dashboards today — or at least the principles we use to create them. We think about how we can use the details of the board to make real-time decisions as we watch them, which is why we place so much emphasis on autorefreshing.

My questions: Is that really how people use their dashboards? And if so, is that the most effective use of their time?

Metrics, Metrics Everywhere!

I think we settled on these “wallboard”-style graphs because Network Operations Centers (NOCs) were the pinnacle of monitoring. NOCs are amazing — the staff are some of the most diligent and intelligent people I’ve ever met. However, the issues they’re looking for and debugging are very different from those of a software development company.

Infrastructure analysis is a great use case for metrics (pre-aggregated time-series data with minimal dimensions) since we don’t need to be able to look at individual packet data. Watching CPUs for persistent spikes and correlating that with network traffic is great. At the time, that’s all we had — and because the software itself was fairly simple and noncritical, we didn’t have to worry too much about the internal details of our applications.

This idea that all companies need a NOC — and that a NOC is built in a particular way — has made engineers believe that they should have tons of wallboards, and that they should include graphs, which require metrics. The reality, however, is that NOCs are a different kind of monitoring — and they are about monitoring, not debugging or observability. What engineers who write applications need is not the same as what an operator needs in a NOC.

The other key part is that wallboards were built and curated by the people who built the machines, networks and overall infrastructure they were monitoring. To be clear, the people who built the networks built them in such standardized and uniform ways that building the dashboards was roughly the same from organization to organization or data center to data center.

In the midst of all this, Grafana became the standard visualization tool for metric data, as it still is for a lot of companies today. Metric data began to proliferate from off-the-shelf devices and even commercial software products, which meant those devices and products could offer standardized approaches to monitoring them.

Grafana added features like importing pre-built dashboards, combining data from different metrics databases in a single view and offering a variety of visualizations of that data. It was a glorious time for home enthusiasts who had dashboards for their home network, because graphs were cool, right? Right?! And having lots of graphs on a single monitor in your office was very much the “in” thing for geeks like me.

The question is, did these dashboards add value to my daily life? Nope. They did, however, make me feel cool, like I was doing something properly. They may have helped if I was getting a slow download, as I could glance over to the monitor and see if there was other traffic. They also taught me a lot about building — and most importantly, maintaining — monitoring systems. Namely, that I never want to do that myself!

Debugging in a Distributed World

While this revolution in monitoring was going on, we saw the rise of distributed systems, later event-driven architectures and microservices, and later still nanoservices and serverless. These different types of complex systems changed the way we thought about reliability and uptime, and ultimately how we reasoned about a system’s behavior.

We found that our systems were essentially large Rube Goldberg machines of our own making, and that we needed a lot more information than percentile graphs to understand why something went wrong. From the complexity of the code itself to the architectural design of the large distributed systems it lives in, graphs alone simply don’t give us enough information.

We got to the stage where we could no longer diagnose the cause of issues just by looking at a dashboard. That doesn’t diminish the usefulness of the dashboard for notification and an overview of the situation — on the contrary, it means dashboards are a good starting point for the investigation.

Debugging Needs Direction

What we found was that these complex systems fail in interesting ways. The failures aren’t always obvious, but generally there’s some kind of graph somewhere that can indicate where to look for the underlying failure; it just won’t tell you why.

Enter dashboards again! But this time, they’re not destined for a TV in an office. Now, we’re using those dashboards as a “one-stop shop” of places to click, with contextual information that will help us uncover why something is failing. We use them for signposting, guiding the direction of debugging.

They’re a place for engineers who support applications to go as a first click from a runbook or alert on their journey to debugging. This is why I suggest that these are debugging launchpads. They’re not a destination; they’re the first stop on the journey.

The important characteristic of a launchpad, as opposed to a wallboard, is that we use graphical representations of data to show curated, correlated insights about a problem or a service’s performance. We create these representations to show people where to look for problems, not to notify them or immediately fix the problems. We use this data to help them find the next question to ask.

The best way to do this is to make each representation of data a link (or signpost) that launches into more questions or further investigation. I’ve equated this approach to the “Enhance! Enhance!” scenes you’ve seen in TV shows like “CSI,” where a pixelated image gives you an idea of where you want to look, but only by zooming in and enhancing the data in a specific area are you able to see its true nature.
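
To make the signposting concrete, here is a minimal sketch of a launchpad panel, written as Python that emits Grafana-style dashboard JSON. Treat the field names as illustrative of the general shape of Grafana's JSON model rather than a definitive schema, and note that the metric query and the tracing URL are hypothetical examples.

import json

# A minimal "launchpad" panel: one error-rate graph whose panel link jumps
# straight into a trace search scoped to the same service and time window.
# The metric query and the tracing URL below are hypothetical examples.
checkout_errors_panel = {
    "title": "checkout-service error rate",
    "type": "timeseries",
    "targets": [
        {"expr": 'sum(rate(http_requests_total{service="checkout", code=~"5.."}[5m]))'}
    ],
    "links": [
        {
            # The signpost: the graph shows where to look, the link asks the next question.
            "title": "View failing traces for this window",
            "url": "https://tracing.example.com/search?service=checkout&error=true&from=${__from}&to=${__to}",
        }
    ],
}

print(json.dumps(checkout_errors_panel, indent=2))

The specific tool doesn't matter; what matters is that every panel carries at least one link like this, so the first click starts an investigation instead of ending at a picture.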

Ask More Questions, Get More Answers

We need to stop thinking about dashboards as a static representation of data and start thinking about them as a tool to aid in debugging. We need to understand that from each graphical representation, there needs to be a next step, another question to ask or an answer to be gained.

Make each panel on a board the start of an investigation. Launch the viewer into a path of questions that will give them the information they need to understand what’s going on.

The post Dashboards, or Launchpads? appeared first on The New Stack.


Code. Create. Commit. Welcome to dev/core


What does it mean to be a developer? That question was at the heart of our thinking behind the new GitHub Shop collection: dev/core. The collection celebrates the developer’s layered experience—from the code, through the world of creation, to the unique identity of you, the developer, the builder, the person at the core of it all. 

Ok, that sounds poetic, we hear you say. But how does that translate into merch? Our dev/core collection captures what it is to be a developer but also brings an exciting update to our core basics. Made by developers, for developers. Let’s dive into it. 

A developer from head to toe

The <header> cap and <footer> socks are for those who know their way around a codebase—and an outfit. The cap kicks things off, a nod to the top of every great project. Down below, the socks wrap things up with comfort. Together, they bookend your look the way you bookend your code.

Getting back to the basics

An image of a woman wearing a hoodie with the Octocat in ASCII and a man wearing a GitHub Copilot hoodie.

Inspired by the all-time favorite black Invertocat hoodie, these two new builds level up your dev uniform. One features our iconic Octocat mascot reimagined in ASCII. The other reps GitHub Copilot, your favorite AI pair programmer. One nods to our roots as developers. The other looks to what’s next.

For when your brain hits Ctrl+Alt+Vibes

An image of a woman wearing a tie dye tee shirt.

Throw it back to your first build—when the code was janky, the caffeine was flowing, and the dream was big. This tie-dye tee channels that raw, colorful chaos that got you into being a dev in the first place. It’s got startup energy. Garage band energy. “I learned CSS on a forum in 2004” energy.

The graph you obsess over (now in tote form)

A woman holding a tote bag with a GitHub contribution graph in the shape of the Invertocat logo.

There’s something deeply satisfying about watching your contribution graph fill up day by day, square by square, with every commit and small (or large) breakthrough. This tote celebrates that love with a contribution graph in the shape of our Invertocat, worn proudly on your side. 

Write code, wear code

A man wearing an ASCII tee shirt.

The ASCII tee is a tribute to the early days of building—when text was all you had and all you needed—and a direct nod to the roots of development, where every line of code is a building block.

Look familiar? You might recognize it from thegithubshop.com homepage, where we’ve created your very own interactive version. You can spin it, shake it, fidget with it—perfect for when your stand-up is getting a little dull.

Made for developers, by developers

Developers are at the heart of what we do, because they’re the core of who we are. Our shop isn’t just a shop. It’s also chock-full of fun developer finds, and we’re not just talking about the swag now. We’ve even added a hidden CLI: type git [space] into the search bar. Have fun!

In our dev/core collection, you can mix and match to create new patterns on our images by tapping on the dev/core pill. This unlocks a tool palette to customize the ASCII pattern, size, and speed.

The dev/core collection is more than merch—it’s a wearable nod to the builders, the dreamers, and the committers who shape the internet every day. From the clean lines of ASCII art to the playful and colorful additions, each piece is carefully designed for you. So whether you’re pushing code, sipping coffee, or staring into the abyss of your terminal, suit up in something that gets it. This is your core. 

🤫 Psst… use the code “GITHUBBLOG15” at checkout to get free shipping from today until June 1. Your laptop’s looking a bit bare, btw. We’ve dropped a few new stickers in the mix too—just saying.

Check out the dev/core collection at thegithubshop.com

The post Code. Create. Commit. Welcome to dev/core appeared first on The GitHub Blog.


Vibe coding: Your roadmap to becoming an AI developer


Editor’s note: This piece was originally published in our LinkedIn newsletter, Branching Out_. Sign up now for more career-focused content > 

Pop quiz: What do healthcare, self-driving cars, and your next job all have in common? 

If you guessed AI, you were right. And with 80% of developers expected to need at least a fundamental AI skill set by 2027, there’s never been a better time to dive into this field.

This blog will walk you through what you need to know, learn, and build to jump into the world of AI—using the tools and resources you already use on GitHub. 

Let’s dive in.

1. Learn essential programming languages and frameworks 💬

    Mastering the right programming languages and tools is foundational for anyone looking to excel in AI and machine learning development. Here’s a breakdown of the core programming languages to zero in on:

    • Python: Known for its simplicity and extensive library support, Python is the cornerstone of AI and machine learning. Its versatility makes it the preferred language for everything from data preprocessing to deploying AI models. (Fun fact: Python overtook JavaScript as the number one programming language in 2024!)
    • Java: With its scalability and cross-platform capabilities, Java is popular for enterprise-level applications and large-scale AI systems.
    • C++: As one of the fastest programming languages, C++ is often used in performance-critical applications like gaming AI, real-time simulations, and robotics.

    Beyond programming, these frameworks give you the tools to design, train, and deploy intelligent systems across real-world applications:

    • TensorFlow: Developed by Google, TensorFlow is a comprehensive framework that simplifies the process of building, training, and deploying AI models.
    • Keras: Built on top of TensorFlow, Keras is user-friendly and enables quick prototyping.
    • PyTorch: Favored by researchers for its flexibility, PyTorch provides dynamic computation graphs and intuitive debugging tools.
    • Scikit-learn: Ideal for traditional machine learning algorithms, Scikit-learn offers efficient tools for data analysis and modeling.

    Spoiler alert: Did you know you can learn programming languages and AI frameworks right on GitHub? Resources like GitHub Learning Lab, The Algorithms, TensorFlow Tutorials, and PyTorch Examples provide hands-on opportunities to build your skills. Plus, tools like GitHub Copilot provide real-time coding assistance that can help you navigate new languages and frameworks easily while you get up to speed.
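
    To give a sense of how compact a first experiment can be, here is a minimal scikit-learn sketch that trains and evaluates a simple classifier on the iris dataset bundled with the library. It is purely illustrative and not tied to any of the resources above.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Load the bundled iris dataset and hold out a quarter of it for testing.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

    # Train a small random forest and report accuracy on the held-out data.
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")

    From there, swapping in your own data or a different estimator is a small change, which is part of why Scikit-learn is such a common starting point.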


     2. Master machine learning 🤖

    Machine learning (ML) is the driving force behind modern AI, enabling systems to learn from data and improve their performance over time. It bridges the gap between raw data and actionable insights, making ML expertise a must-have if you’re looking for a job in tech. Here are some key subfields to explore:

    • Deep learning: A subset of ML, deep learning uses multi-layered neural networks to analyze complex patterns in large datasets. While neural networks are used across ML, deep learning focuses on deeper architectures and powers advancements like speech recognition, autonomous vehicles, and generative AI models.
    • Natural language processing (NLP): NLP enables machines to understand, interpret, and respond to human language. Applications include chatbots, sentiment analysis, and language translation tools like Google Translate.
    • Computer vision: This field focuses on enabling machines to process and interpret visual information from the world, such as recognizing objects, analyzing images, and even driving cars.

    Luckily, you can explore ML right on GitHub. Start with open source repositories like Awesome Machine Learning for curated tools and tutorials, Keras for deep learning projects, NLTK for natural language processing, and OpenCV for computer vision. Additionally, try real-world challenges by searching for Kaggle competition solutions on GitHub or contribute to open source AI projects tagged with “good first issue” to gain hands-on experience. 
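
    As a small taste of the deep learning side, here is a minimal Keras sketch: a fully connected network trained on the MNIST digit images that ship with TensorFlow. The layer sizes and epoch count are illustrative rather than tuned.

    import tensorflow as tf

    # Load MNIST (bundled with TensorFlow) and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small multi-layered network: flatten each 28x28 image, one hidden layer,
    # then a 10-way softmax over the digit classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=3, validation_split=0.1)
    print(model.evaluate(x_test, y_test, verbose=0))

    The same spirit of small, self-contained experiments carries over to NLP with NLTK and to computer vision with OpenCV.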


    3. Build a GitHub portfolio to showcase your skills 💼

    A strong GitHub portfolio highlights your skills and AI projects, setting you apart in the developer community. Here’s how to optimize yours:

    • Organize your repositories: Use clear names, detailed README files, and instructions for others to replicate your work.
    • Feature your best work: Showcase projects in areas like NLP or computer vision, and use tags to improve discoverability.
    • Create a profile README: Introduce yourself with a professional README that includes your interests, skills, and standout projects.
    • Use GitHub Pages: Build a personal site to host your projects, case studies, or interactive demos.
    • Contribute to open source: Highlight your open source contributions to show your collaboration and technical expertise.

    For detailed guidance, check out the guides on Building Your Stunning GitHub Portfolio and How to Create a GitHub Portfolio.


    4. Get certified in GitHub Copilot 🏅

    Earning a certification in GitHub Copilot showcases your expertise in leveraging AI-powered tools to enhance development workflows. It’s a valuable credential that demonstrates your skills to employers, collaborators, and the broader developer community. Here’s how to get started:

    • Understand GitHub Copilot: GitHub Copilot is an AI agent designed to help you write code faster and more efficiently. Familiarize yourself with its features, such as real-time code suggestions, agent mode in Visual Studio Code, model context protocol (MCP), and generating boilerplate code across multiple programming languages.
    • Explore certification options: GitHub offers certification programs through its certification portal. These programs validate your ability to use GitHub tools effectively, including GitHub Copilot. They also cover key topics like AI-powered development, workflow automation, and integration with CI/CD pipelines.
    • Prepare for the exam: Certification exams typically include theoretical and practical components. Prepare by exploring GitHub Copilot’s official documentation, completing hands-on exercises, and working on real-world projects where you utilize GitHub Copilot to solve coding challenges.
    • Earn the badge: Once you complete the exam successfully, you’ll receive a digital badge that you can showcase on LinkedIn, your GitHub profile, or your personal portfolio. This certification will enhance your resume and signal to employers that you’re equipped with cutting-edge AI development tools.

    Check out this LinkedIn guide for tips on becoming a certified code champion with GitHub Copilot. 


    The post Vibe coding: Your roadmap to becoming an AI developer appeared first on The GitHub Blog.


    A Practical Roadmap for Adopting Vibe Coding


    A new wave of generative AI tools is redefining the way we build software and who can participate in the process. At the forefront of this revolution is “vibe coding” — using natural language prompts to generate functional code through AI assistance.

    Recent industry data shows that nearly half of developers had already integrated AI coding tools by 2023, with vibe coding projects demonstrating measurable efficiency improvements. Vibe coding lowers the barriers to entry for development. However, that lower barrier can also lead to lower quality: AI provides the “vibe,” or the suggested pattern, and some developers might accept it without critical evaluation or deep comprehension.

    Traditional development approaches rely heavily on specific programming languages and syntax rules. Vibe coding reduces the need to fully comprehend the nuances of every language and development pattern, but it does not eliminate that need. This tension between accessibility and quality reflects a broader transformation in software creation.

    AI is fundamentally shifting what development means. Team members can focus on desired outcomes rather than implementation details. Logic, business requirements and user experience take precedence over syntax correctness and language expertise. Organizations increasingly value professionals who can effectively bridge product vision with technical execution — often without writing traditional code.

    While vibe coding offers tremendous potential to accelerate development and democratize software creation, it must be implemented thoughtfully with proper governance to ensure that speed doesn’t come at the expense of quality and maintainability.

    Agentic AI and Vibe Coding

    Vibe coding represents an early step in AI-assisted development, and agentic AI furthers this evolution.

    Vibe coding is about getting something to appear to work quickly rather than building a robust, efficient and maintainable solution based on solid knowledge. This is where agentic AI can help. Agents can take abstract instructions like “build a customer database” and autonomously handle all the technical implementation details, bridging the gap between quick prototypes and properly engineered solutions.
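
    To make the distinction concrete, here is a deliberately simplified Python sketch of the loop at the heart of an agent: the model proposes the next action toward an abstract goal, a tool executes it and the observation is fed back in until the model reports it is done. The call_model function and the run_sql tool are hypothetical placeholders, not any vendor's actual API.

    from typing import Callable

    def call_model(goal: str, history: list[dict]) -> dict:
        """Hypothetical LLM call; returns {'done': bool, 'tool': str, 'args': dict}."""
        raise NotImplementedError("wire this up to your model provider of choice")

    def run_sql(statement: str) -> str:
        """Hypothetical tool: run a schema migration or query and return its output."""
        raise NotImplementedError("wire this up to your database")

    TOOLS: dict[str, Callable[..., str]] = {"run_sql": run_sql}

    def run_agent(goal: str, max_steps: int = 10) -> list[dict]:
        """Plan the next step, act with a tool, feed the result back in, repeat."""
        history: list[dict] = []
        for _ in range(max_steps):
            decision = call_model(goal, history)        # the model plans the next action
            if decision.get("done"):
                break
            tool = TOOLS[decision["tool"]]              # look up the requested tool
            observation = tool(**decision["args"])      # act on the environment
            history.append({"action": decision, "observation": observation})
        return history

    # For example: run_agent("build a customer database with name, email and signup date")

    Real agent frameworks wrap planning, memory, guardrails and review around this core, but the loop is the essence of the autonomy described here.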

    While vibe coding primarily focuses on code generation through natural language prompts, agentic AI expands these capabilities into an autonomous development ecosystem. This distinction is essential. Vibe coding involves a human developer using AI without requiring deep understanding. Agentic AI involves an AI system taking on a more proactive planning and autonomous role in building software based on a given goal.

    The relationship between vibe coding and agentic AI is symbiotic. Vibe coding provides the foundation for human-AI interaction through natural language, while agentic systems build upon this foundation to create self-directed development partners. These intelligent systems respond to prompts and anticipate needs, make independent decisions and take action with minimal supervision.

    Agentic AI systems enhance vibe coding by integrating deeply into development workflows, conducting sophisticated code reviews, recommending infrastructure optimizations and adapting to changing requirements. Industry research from Deloitte indicates that 25% of companies using generative AI will implement agentic AI pilots in 2025, which is expected to double by 2027.

    Implementing vibe coding and agentic AI together requires careful planning. Organizations must establish comprehensive security protocols, ensure compliance with data regulations, and create clear communication channels between AI systems and existing tools. Despite these implementation challenges, the combined power of vibe coding and agentic AI offers compelling benefits in development speed, code quality and resource optimization.

    Taking an Evolutionary Approach to Implementation

    Development teams and technical leaders can follow this evolutionary path to effectively implement vibe coding and agentic AI:

    1. Begin with AI assistance: Introduce developers to AI tools that improve productivity for routine tasks. Focus on building familiarity, comfort and confidence with AI assistance for coding, documentation and simple problem-solving.
    2. Expand AI assistance across the software development life cycle: Move beyond just code writing to integrate AI tools into testing, debugging, code review and documentation. Identify repetitive, time-intensive workflows where AI can create immediate value with minimal disruption.
    3. Establish governance frameworks and interoperability standards: Create clear policies for use of AI tools, including data access permissions, security protocols and quality standards. Define protocols for how AI systems will share information and collaborate across platforms.
    4. Introduce autonomous AI agents for specific tasks: Deploy agents to handle self-contained development tasks with a degree of autonomy. These agents take abstract goals like “optimize this database query” and handle the implementation details independently while maintaining code quality.
    5. Scale agent implementation across the organization: Expand the scope of tasks handled by agents and introduce multiple agents working together on complex projects. Integrate agents deeply into the end-to-end software development life cycle and redesign team structures to create cross-functional groups combining technical expertise and domain knowledge.
    6. Continuously improve through feedback and education: Implement systems to monitor agent performance with clear metrics and correction protocols. Invest in organizationwide AI literacy through training programs for prompt engineering, AI collaboration techniques and effective system oversight.

    This evolutionary approach ensures technical implementation and organizational leadership progress together in the AI transformation journey, maximizing the benefits of vibe coding while building robust, efficient solutions.

    The Changing Developer Landscape

    The engineering role is evolving as vibe coding and agentic AI handle more routine development tasks. Less experienced developers face a steeper learning curve with fewer straightforward tasks available for initial skill-building. Simultaneously, senior engineers must adapt as AI takes over traditional oversight responsibilities.

    The industry is witnessing growing demand for new specialized roles like prompt engineers who effectively guide and refine AI outputs. The most valuable skills now include architecture design, strategic thinking and the ability to collaborate with AI systems effectively.

    While these shifts may create downward pressure on certain roles and salaries, they also create opportunities for developers who embrace AI as partners rather than threats. The most successful engineers will be those who leverage AI to handle routine tasks while focusing their expertise on innovation and strategic problem-solving.

    Organizations that embrace vibe coding and agentic AI gain significant competitive advantages through accelerated development cycles, improved code quality and more efficient resource allocation. Those who fail to adapt risk being outpaced in an increasingly AI-powered development landscape.

    The post A Practical Roadmap for Adopting Vibe Coding appeared first on The New Stack.


    Addendum to o3 and o4-mini system card: Codex

    1 Share
    Codex is a cloud-based coding agent. Codex is powered by codex-1, a version of OpenAI o3 optimized for software engineering. codex-1 was trained using reinforcement learning on real-world coding tasks in a variety of environments to generate code that closely mirrors human style and PR preferences, adheres precisely to instructions, and iteratively runs tests until passing results are achieved.

    Introducing Codex

    Introducing Codex: a cloud-based software engineering agent that can work on many tasks in parallel, powered by codex-1. With Codex, developers can simultaneously deploy multiple agents to independently handle coding tasks such as writing features, answering questions about your codebase, fixing bugs, and proposing pull requests for review.