Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
122,343 stories · 29 followers

Microsoft is testing a new Windows 11 Start menu with floating widgets

A screenshot of widgets on the Start menu
Image: Albacore (X)

Microsoft has quietly started testing an intriguing change to the Windows 11 Start menu that could introduce a floating panel full of “companion” widgets. Windows watcher Albacore discovered the new Start menu feature in the latest test versions of Windows 11 that Microsoft has released publicly.

While Microsoft has not yet announced this feature, the “Start menu Companions” appear to be a way to allow developers to extend the Windows 11 Start menu with widget-like functionality that lives inside a floating island that can be docked next to the Start menu. It looks like developers will be able to build apps that provide widget-like information through adaptive cards — a platform-agnostic way of displaying UI blocks of information.
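Adaptive Cards are a published JSON schema, so, purely as an illustration of the kind of payload a companion might render, here is a minimal card expressed as a Python dict. The field names follow the public Adaptive Cards schema; how Start menu companions will actually consume such cards is not yet documented, so treat the shape as an assumption.

import json

# A minimal Adaptive Card payload (public schema; the Start menu companion integration itself is unannounced).
card = {
    "type": "AdaptiveCard",
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.5",
    "body": [
        {"type": "TextBlock", "text": "Next meeting", "weight": "Bolder"},
        {"type": "TextBlock", "text": "Standup at 9:30 AM", "wrap": True},
    ],
}

print(json.dumps(card, indent=2))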

Continue reading…


Wasm vs. Docker: Performant, Secure, and Versatile Containers


Docker and WebAssembly (Wasm) represent two pivotal technologies that have reshaped the software development landscape. You’ve probably started to hear more about Wasm in the past few years as it has gained in popularity, and perhaps you’ve also heard about the benefits of using it in your application stack. This may have led you to think about the differences between Wasm and Docker, especially because the technologies work together so closely.

In this article, we’ll explore how these two technologies can work together to enable you to deliver consistent, efficient, and secure environments for deploying applications. By marrying these two tools, developers can easily reap the performance benefits of WebAssembly with containerized software development.


What’s Wasm?

Wasm is a compact binary instruction format governed by the World Wide Web Consortium (W3C). It’s a portable compilation target for more than 40 programming languages, like C/C++, C#, JavaScript, Go, and Rust. In other words, Wasm is a bytecode format encoded to run on a stack-based virtual machine.

Similar to the way Java can be compiled to Java bytecode and executed on the Java Virtual Machine (JVM), which can then be compiled to run on various architectures, a program can be compiled to Wasm bytecode and then executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.


What’s a Wasm runtime?

Wasm runtimes bridge the gap between portable bytecode and the underlying hardware architecture. They also provide APIs to communicate with the host environment and provide interoperability between other languages, such as JavaScript.

At a high level, a Wasm runtime runs your bytecode in three semantic phases:

  1. Decoding: Processing the module to convert it to an internal representation
  2. Validation: Checking to see that the decoded module is valid
  3. Execution: Installing and invoking a valid module

Wasm runtime examples include Spin, Wasmtime, WasmEdge, and Wasmer. Major browsers also ship their own Wasm runtimes: Firefox uses SpiderMonkey and Chrome uses V8.
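To make those phases concrete, here is a minimal sketch using the wasmtime Python bindings (one of the runtimes named above); the WAT module and function name are made up for illustration. Compiling the module covers decoding and validation, and instantiating and calling it covers execution.

# pip install wasmtime
from wasmtime import Engine, Store, Module, Instance

engine = Engine()
store = Store(engine)

# Decode + validate: compiling the module performs both phases.
module = Module(engine, """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
""")

# Execute: instantiate the module and invoke an exported function.
instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # prints 5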

Why use Wasm?

To understand why you might want to use WebAssembly in your application stack, let’s examine its main benefits — notably, security without sacrificing performance and versatility.

Security without sacrificing performance

Wasm enables code to run at near-native speed within a secure, sandboxed environment, protecting systems from malicious software. This performance is achieved through just-in-time (JIT) compilation of WebAssembly bytecode directly into machine code, bypassing the need for transpiling into an intermediate format. 

Wasm also uses shared linear memory — a contiguous block of memory that simplifies data exchange between modules or between WebAssembly and JavaScript. This design allows for efficient communication and enables developers to blend the flexibility of JavaScript with the robust performance of WebAssembly in a single application.

The security of this system is further enhanced by the design of the host runtime environment, which acts as a sandbox. It restricts the Wasm module from accessing anything outside of the designated memory space and from performing potentially dangerous operations like file system access, network requests, and system calls. WebAssembly’s requirement for explicit imports and exports to access host functionality adds another layer of control, ensuring a secure execution environment.
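Here is a small sketch of that explicit-imports rule, again using the wasmtime Python bindings as an illustrative host: the guest module can only call host functionality that the host deliberately hands it at instantiation time. The module, names, and host function are assumptions made up for the example.

from wasmtime import Engine, Store, Module, Instance, Func, FuncType, ValType

engine = Engine()
store = Store(engine)

# The guest imports exactly one host function; nothing else on the host is reachable.
module = Module(engine, """
(module
  (import "host" "log_i32" (func $log (param i32)))
  (func (export "run")
    i32.const 42
    call $log))
""")

# The host decides what to expose by constructing the import explicitly.
log_i32 = Func(store, FuncType([ValType.i32()], []), lambda x: print("guest says:", x))

instance = Instance(store, module, [log_i32])
instance.exports(store)["run"](store)  # prints "guest says: 42"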

Use case versatility

Finally, WebAssembly is relevant for more than traditional web platforms (contrary to its name). It’s also an excellent tool for server-side applications, edge computing, game development, and cloud/serverless computing. If performance, security, or target device resources are a concern, consider using this compact binary format.

During the past few years, WebAssembly has become more prevalent on the server side because of the WebAssembly System Interface (or WASI). WASI is a modular API for Wasm that provides access to operating system features like files, filesystems, and clocks. 
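As a rough sketch of what this looks like from the host's perspective (using the wasmtime Python bindings again; the hello_wasi.wasm filename is a placeholder for any WASI-targeting build), the host explicitly grants capabilities such as stdout and a preopened directory before running the module:

from wasmtime import Engine, Store, Module, Linker, WasiConfig

engine = Engine()
linker = Linker(engine)
linker.define_wasi()  # wires up the WASI imports for the guest

store = Store(engine)
wasi = WasiConfig()
wasi.inherit_stdout()              # allow writes to the host's stdout
wasi.preopen_dir(".", "/sandbox")  # expose the current dir as /sandbox inside the guest
store.set_wasi(wasi)

module = Module.from_file(engine, "hello_wasi.wasm")  # placeholder filename
instance = linker.instantiate(store, module)
instance.exports(store)["_start"](store)  # run the WASI entry point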

Docker vs. Wasm: How are they related?

After reading about WebAssembly code, you might be wondering how Docker is relevant. Doesn’t WebAssembly handle sandboxing and portability? How does Docker fit in the picture? Let’s discuss further.

Docker helps developers build, run, and share applications — including those that use Wasm. This is especially true because Wasm is a complementary technology to Linux containers. However, handling these containers without solid developer experience can quickly become a roadblock to application development.

That’s where Docker comes in with a smooth developer experience for building with Wasm and/or Linux containers.

Benefits of using Docker and Wasm together

Using Docker and Wasm together affords great developer experience benefits as well, including:

  • Consistent development environments: Developers can use Docker to containerize their Wasm runtime environments. This approach allows for a consistent Wasm development and execution environment that works the same way across any machine, from local development to production.
  • Efficient deployment: By packaging Wasm applications within Docker, developers can leverage efficient image management and distribution capabilities. This makes deploying and scaling these types of applications easier across various environments.
  • Security and isolation: Although Docker isolates applications at the operating system level, Wasm provides a sandboxed execution environment. When used together, the technologies offer a robust layered security model against many common vulnerabilities.
  • Enhanced performance: Developers can use Docker containers to deploy Wasm applications in serverless architectures or as microservices. This lets you take advantage of Wasm’s performance benefits in a scalable and manageable way.

How to enable Wasm on Docker Desktop

If you’re interested in running WebAssembly containers, you’re in luck! Support for Wasm workloads is now in beta, and you can enable it on Docker Desktop by checking Enable Wasm on the Features in development tab under Settings (Figure 2).

Note: Make sure you have containerd image store support enabled first.

Figure 2: Enable Wasm in Docker Desktop.

After enabling Wasm in Docker Desktop, you’re ready to go. Docker currently supports many Wasm runtimes, including Spin, WasmEdge, and Wasmtime. You can also find detailed documentation that explains how to run these applications.
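One way to script a quick test is to shell out to the Docker CLI. The runtime name, platform flag, and sample image below follow Docker's Wasm documentation at the time of writing, so treat them as assumptions and substitute your own image:

import subprocess

# Run a Wasm module as a container. The --runtime value selects the WasmEdge containerd
# shim, and --platform marks the artifact as wasi/wasm rather than a Linux binary.
subprocess.run(
    [
        "docker", "run", "--rm",
        "--runtime=io.containerd.wasmedge.v1",
        "--platform=wasi/wasm",
        "secondstate/rust-example-hello",  # sample image from Docker's Wasm docs (assumption)
    ],
    check=True,
)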

How Docker supports WebAssembly

To explain how Docker supports WebAssembly, we’ll need to quickly review how the Docker Engine works.

The Docker Engine builds on a higher-level container runtime called containerd. This runtime provides fundamental functionality to control the container lifecycle. Using a shim process, containerd can leverage runc (a low-level runtime) under the hood. Then, runc can interact directly with the operating system to manage various aspects of containers.


What’s neat about this design is that anyone can write a shim to integrate other runtimes with containerd, including WebAssembly runtimes. As a result, you can plug various Wasm runtimes into Docker, like WasmEdge, Spin, and Wasmtime.

The future of WebAssembly and Docker

WebAssembly is continuously evolving, so you’ll need to keep a close eye on ecosystem developments. One recent advancement relates to how the new WebAssembly Component Model will impact shims for the various container runtimes. At Docker, we’re working to make it simple for developers to create Wasm containers and enhance the developer experience.

In a famous 2019 tweet thread, Docker founder Solomon Hykes described the future of cloud computing. In this future, he describes a world where Docker runs Windows, Linux, and WebAssembly containers side by side. Given all the recent developments in the ecosystem, that future is well and truly here.

Recent advancements include:

  • The launch of WASI Preview 2, which fully rebased WASI on the Component Model type system and semantics: This makes it modular, fully virtualizable, and accessible to various source languages.
  • The release of the SpinKube open source project by Fermyon, Microsoft, SUSE, LiquidReply, and others: The project provides a straightforward path for deploying Wasm-based serverless functions into Kubernetes clusters. Developers can use SpinKube with Docker via k3s (a lightweight wrapper to run Rancher Labs’ minimal Kubernetes distribution). Docker Desktop also includes the shim, which enables you to run Kubernetes containers on your local machine.

In 2024, we expect the combination of Wasm and containers to be highly regarded for its efficiency, scalability, and cost.

Wrapping things up

In this article, we explained how Docker and Wasm work together and how to use Docker for Wasm workloads. We’re excited to see Wasm’s adoption grow in the coming years and will continue to enhance our support to meet developers both where they’re at and where they’re headed. 

Check out the following related materials for details on Wasm and how it works with Docker:

Learn more

Thanks to Sohan Maheshwar, Developer Advocate Lead at Fermyon, for collaborating on this post.


SQL Server Networking troubleshooting documentation expanded


The SQL Server CSS (technical support) and Content teams have been publishing documentation on troubleshooting SQL Server networking issues.

Here is a list of articles that have been produced over the past few months.

 

Much of the content for these articles was the work of Malcolm Stewart - a tenured SQL Networking Escalation Engineer who has left a large legacy of documentation and troubleshooting tools in the SQL networking domain.  Also, the following individuals contributed many hours of reviews, ideas, improvements, and content creation and organization: Pradeep Madheshiya, Padma Jayaraman, Seven Dong, Haiying Yu, and Joseph Pilov.

 

We hope you find these useful. Please share them with others, and don’t hesitate to provide feedback at the bottom of each article page by clicking “Was this page helpful?”. Also, be on the lookout for more, as we are not finished yet.

 

 

Thank you!

 


Exploring Microsoft's Phi-3 Family of Small Language Models (SLMs) with Azure AI


Microsoft's Phi-3 family of small language models (SLMs) has been gaining a lot of attention recently, and for good reason. These SLMs are powerful yet lightweight and efficient, making them perfect for applications with limited computational resources.


In this blog post, we will explore how to interact with Microsoft's Phi-3 models using Azure AI services and the Model catalog. We'll dive into the process of deploying and integrating these models into real-world applications, as well as practical exercises to solidify your understanding of the technology.


We'll also create our own chatbot interface powered by Phi-3 using Gradio. This will allow you to interact with the model in a user-friendly way, helping you gain confidence in deploying and integrating AI into applications.

If you're interested in learning more about Microsoft's Phi-3 models and how to use them with Azure AI services, keep reading!

Step 1: Set Up Your Azure Account

Before you dive into using Phi-3, you’ll need to set up an Azure account if you don’t already have one. Visit the Azure website and follow the sign-up instructions. All students get Azure for Students with $100 of credit; simply register at http://aka.ms/azure4student


Step 2: Access the Azure AI Model Catalog

Once your account is set up, navigate to the Azure AI Model Catalog, where you’ll find the Phi-3 model(s) listed. You can also browse more than 1,500 frontier and open models from providers including Hugging Face, Meta, Mistral, Cohere, and many more.

Screenshot: the Azure AI Studio model catalog.

 


Step 3: How to deploy large language models with Azure AI Studio to an online managed endpoint

 

Deploying a large language model (LLM) makes it available for use in a website, an application, or other production environments. This typically involves hosting the model on a server or in the cloud, and creating an API or other interface for users to interact with the model. You can invoke the deployment for real-time inference for chat, copilot, or another generative AI application.


Deploy Open Models to Azure AI Studio

 

Follow the steps below to deploy an open model such as distilbert-base-cased to a real-time endpoint in Azure AI Studio.

  1. Choose a model you want to deploy from the Azure AI Studio model catalog. Alternatively, you can initiate deployment by selecting + Create from your project's Deployments page.

  2. Select Deploy to project on the model card details page.

  3. Choose the project you want to deploy the model to.

  4. Select Deploy.

  5. You land on the deployment details page. Select Consume to obtain code samples that can be used to consume the deployed model in your application.

You can use the Azure AI Generative SDK to deploy an open model. In this example, you deploy a distilbert-base-cased model.

 

# Import the libraries
from azure.ai.resources.client import AIClient
from azure.ai.resources.entities.deployment import Deployment
from azure.ai.resources.entities.models import PromptflowModel
from azure.identity import DefaultAzureCredential

 

 

Credential info can be found under your project settings on Azure AI Studio. You can go to Settings by selecting the gear icon on the bottom of the left navigation UI.

 

credential = DefaultAzureCredential()
client = AIClient(
    credential=credential,
    subscription_id="<xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>",
    resource_group_name="<YOUR_RESOURCE_GROUP_NAME>",
    project_name="<YOUR_PROJECT_NAME>",
)

 

Define the model and the deployment. The model_id can be found on the model card on Azure AI Studio model catalog.

 

model_id = "azureml://registries/azureml/models/distilbert-base-cased/versions/10"
deployment_name = "my-distilbert-deployment"
deployment = Deployment(
    name=deployment_name,
    model=model_id,
)

 

Deploy the model. You can deploy to a real-time endpoint from here directly! Optionally, you can use the Azure AI Generative AI SDK to deploy any model from the model catalog.

 

client.deployments.create_or_update(deployment)

 

 

Delete the deployment endpoint

Deleting a deployment and its associated endpoint isn't supported via the Azure AI SDK. To delete a deployment in Azure AI Studio, select the Delete button on the top panel of the deployment details page.


Quota considerations

Deploying and inferencing with real-time endpoints consumes the Virtual Machine (VM) core quota assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit; once that happens, you can request a quota increase.


Recommended virtual machine SKUs within Azure AI Studio to run Phi-3

From the smallest VM ($) to the largest ($$$):

  • Standard_NC6s_v3 (this is the max size for Azure for Students)
  • Standard_NC12s_v3
  • Standard_NC24s_v3
  • Standard_ND40rs_v2
  • Standard_NC24ads_A100_v4
  • Standard_NC48ads_A100_v4
  • Standard_NC96ads_A100_v4
  • Standard_ND96asr_v4
  • Standard_ND96amsr_A100_v4

NOTE: If you're a student, you can run a single VM with a maximum of 6 GPUs, so please select Standard_NC6s_v3. Phi-3-mini-instruct is small enough to run on a local device, so smaller VMs will work for this demo and save your credit and costs.

 

Step 4: Setting up your Python environment

Make sure you have the following prerequisites:

  • An Azure Machine Learning workspace
  • A requirements.txt file with the following Python libraries, which need to be installed:
    azure-ai-ml
    azure-identity
    datasets
    gradio
    pandas
  • Note: You can install the libraries into your existing environment by using pip install 

 

pip install -r requirements.txt

 

  • An instance of the Phi-3 model deployed to an Online Endpoint
    Note: You can immediately start using the Phi-3 model through an Azure Managed Online Endpoint. Azure Managed Online Endpoints let you deploy your models as a web service easily; a hedged REST-call sketch follows after this list.
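If you prefer to call a managed online endpoint over REST rather than through the SDK, a minimal sketch looks like the following. The scoring URI and key are placeholders (copy the real values from the endpoint's Consume tab), and the request body mirrors the input_data format used later in this post.

import requests

scoring_uri = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # placeholder
api_key = "<your-endpoint-key>"  # placeholder, from the endpoint's Consume tab

payload = {
    "input_data": {
        "input_string": [{"role": "user", "content": "What can you tell me about Phi-3?"}],
        "parameters": {"temperature": 0.7, "max_new_tokens": 200},
    }
}

response = requests.post(
    scoring_uri,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())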


Step 5: Getting started on your code

 

# Importing required libraries.
# MLClient is the main class that we use to interact with Azure AI.
# DefaultAzureCredential and InteractiveBrowserCredential are used for authentication purposes.
# The os library is used to access environment variables.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
import os

 

Next, we will set up the credentials to authenticate with Azure. We first try to use the DefaultAzureCredential. If that fails (for example, if we are running the code on a machine that is not logged into Azure), we fall back to using InteractiveBrowserCredential, which will prompt the user to log in.

 

# Setting up credentials.
# We first try to use the DefaultAzureCredential. If that fails, we fall back to using
# InteractiveBrowserCredential, which will prompt us to log in.
try:
    credential = DefaultAzureCredential()
    credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    credential = InteractiveBrowserCredential()

 

Finally, we create an MLClient for our Azure AI workspace. We use environment variables to get the subscription ID, resource group name, and workspace name.

 

# Creating an MLClient for the workspace.
# We use environment variables to get the subscription ID, resource group name,
# and workspace name.
workspace_ml_client = MLClient(
    credential,
    subscription_id=os.getenv("SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("RESOURCE_GROUP"),
    workspace_name=os.getenv("WORKSPACE_NAME"),
)

 

Loading and using other data sets with Phi-3

 

# Loading & using a dataset: experimenting with a dataset.
# Step 1. Import the necessary Python libraries; we recommend pandas.
import pandas as pd
from datasets import load_dataset

# Step 2. Load in your dataset.
# Example: we are using the ultrachat_200k dataset from Hugging Face and select the
# test_sft split.
# Note: you can use another dataset; simply replace the location and test_sft below.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k")["test_sft"]

# Step 3. Convert the dataset into a pandas DataFrame.
# Cleaning the data: to make the data cleaner we can also drop columns.
# Example: we drop the 'prompt_id' and 'messages' columns, which are not needed for
# our current task.
df = pd.DataFrame(dataset).drop(columns=["prompt_id", "messages"])

# Step 4. Display a random sample of rows from the DataFrame.
# This gives us a quick look at the data we'll be working with.
# Example: here we choose 5 rows; simply replace 5 with the number of rows required.
df.sample(5)

 

Next, we want to test our model with a random sample. This ensures that we have a diverse range of topics to test our model with, and that the testing process is as unbiased as possible.

 

# Creating a random sample from our dataset as a test case for Phi-3.
# (json and random are imported here so this cell can run on its own.)
import json
import random

# Step 1. Sample 5 random examples from the DataFrame and convert them to a list.
examples = df.sample(5).values.tolist()

# Step 2. Convert the examples to a JSON string.
examples_json = json.dumps(examples, indent=2)

# Step 3. Select a random index from the examples.
# random.randint returns a random integer within the specified range.
i = random.randint(0, len(examples) - 1)

# Use this random index to select an example from our list.
sample = examples[i]
print(sample)

 

Getting a response to the input prompt (user question)

 

# Getting the Phi-3 model to generate a response to a user's question.
# (json and tempfile are imported here so this cell can run on its own.)
import json
import tempfile

# Step 1. Define the input data.
# This includes the user's message (their prompt/question) plus some additional
# parameters for the Phi-3 model. The parameters (temperature, top_p, do_sample,
# and max_new_tokens) control the randomness and length of the model's output.
messages = {
    "input_data": {
        "input_string": [
            {
                "role": "user",
                "content": "This is the user's input question or prompt?"
            }
        ],
        "parameters": {
            "temperature": 0.7,
            "top_p": 0.9,
            "do_sample": True,
            "max_new_tokens": 500
        }
    }
}

# Step 2. Write the input data to a temporary file.
# The invoke method of the workspace_ml_client.online_endpoints object requires a
# file as input.
with tempfile.NamedTemporaryFile(suffix=".json", delete=False, mode="w") as temp:
    json.dump(messages, temp)
    temp_file_name = temp.name

# Step 3. Invoke the Phi-3 model to get a response.
# The invoke method sends the input data to the model and returns the model's output.
# You will find the endpoint_name and deployment_name details under
# Build > Components > Deployments > then select the deployment you created.
response = workspace_ml_client.online_endpoints.invoke(
    endpoint_name="Replace this with your endpoint name",
    deployment_name="Replace this with your deployment name",
    request_file=temp_file_name,
)

# Step 4. Get the response from the model, parse it, and add it to the input data.
# This allows us to build up the conversation history. To display the message back
# to the user, we print the updated input data, which includes the user's message
# and the model's response.
response_json = json.loads(response)["output"]
response_dict = {"content": response_json, "role": "assistant"}
messages["input_data"]["input_string"].append(response_dict)
print(json.dumps(messages["input_data"]["input_string"], indent=2))

 

Now we want to test the model, so we need to use the sample data we created earlier.

 

# Step 1. Import the libraries used for the random data and files we created previously.
import json
import tempfile
import random

# Step 2. Use a random sample from the examples.
i = random.randint(0, len(examples) - 1)
sample = examples[i]

# Step 3. Define the input data.
messages = {
    "input_data": {
        "input_string": [{"role": "user", "content": sample[0]}],
        "parameters": {
            "temperature": 0.7,
            "top_p": 0.9,
            "do_sample": True,
            "max_new_tokens": 500,
        },
    }
}

# Step 4. Write the input data to a temporary file.
with tempfile.NamedTemporaryFile(suffix=".json", delete=False, mode="w") as temp:
    json.dump(messages, temp)
    temp_file_name = temp.name

# Step 5. Invoke the Phi-3 model and get the response.
response = workspace_ml_client.online_endpoints.invoke(
    endpoint_name="Replace with your endpoint",
    deployment_name="Replace with your deployment",
    request_file=temp_file_name,
)

# Step 6. Parse the response and add it to the input data.
response_json = json.loads(response)["output"]
response_dict = {"content": response_json, "role": "assistant"}
messages["input_data"]["input_string"].append(response_dict)

# Step 7. Display the updated input data.
print(json.dumps(messages["input_data"]["input_string"], indent=2))

 

Now we want to create a UI to experiment with our model and its chat inputs and responses. This process creates a user-friendly chat interface for the Phi-3 model.

 

# Building the chat interface.
# Step 1. Use Gradio to create a UI.
# Define a predict function that takes a message/input plus the history of previous
# messages as input. This function prepares the input data for the Phi-3 model,
# invokes the model, and processes the model's response.
import gradio as gr  # Gradio is listed in requirements.txt

def predict(message, history):
    messages = {
        "input_data": {
            "input_string": [],
            "parameters": {
                "temperature": 0.6,
                "top_p": 0.9,
                "do_sample": True,
                "max_new_tokens": 500,
            },
        }
    }
    for user, assistant in history:
        messages["input_data"]["input_string"].append({"content": user, "role": "user"})
        messages["input_data"]["input_string"].append({"content": assistant, "role": "assistant"})
    messages["input_data"]["input_string"].append({"content": message, "role": "user"})
    ...

# Step 2. Create a Gradio interface for it.
# This interface includes a textbox for the user to enter their message.
gr.ChatInterface(
    fn=predict,
    textbox=gr.Textbox(
        value="Ask any question?",
        placeholder="Ask me anything...",
        scale=5,
        lines=3,
    ),
    chatbot=gr.Chatbot(render_markdown=True),
    examples=examples,
    title="Phi-3: This is a response example!",
    fill_height=True,
).launch()

 

For technical students, exploring Microsoft's Phi-3 family of small language models (SLMs) and their integration into applications via the Azure AI Model catalog shows that powerful AI can be achieved with lighter, more efficient models. This blog post aimed to illustrate that concept by showcasing the benefits of using Phi-3 models, with step-by-step guidance for deploying and integrating AI into applications, as well as practical exercises like our Gradio-powered chatbot.

Don't stop here: continue your exploration of AI with Azure AI and keep learning and building. We would love for you to share what you're building and whether you'd like to see more content, and consider sharing this tutorial with colleagues or through your professional network to help grow the field of AI for all. I look forward to seeing what you create with Azure AI. You can check out our new Phi-3 Cookbook for getting started with Phi-3, and learn more about what you can do in Azure AI Studio.


IoT Coffee Talk: Episode 208 - 4th Year Anniversary!!

From: Iot Coffee Talk
Duration: 1:02:52

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week, Rob, Jan, Dimitri, Stephanie, Pete, Leonard, Steve, and Marc jump on Web3 to celebrate the 4th ANNIVERSARY of IOT COFFEE TALK and to talk about:

* BAD KARAOKE: "Beat It", Michael Jackson (with Steve Lukather & Eddie Van Halen)
* The IoT Coffee Talk origin story
* IoT Coffee Talk is the necromancer of dead IoT technologies!
* A trip down memory road - the first 10 episodes of IoT Coffee Talk
* GenAI shaming
* Why YouTube constantly picked photos of Stephanie for our YouTube thumbnails
* Rick's prediction that IoT Coffee Talk would die after a few weeks.... uh.
* Generative AI and Copilots/Duos, etc. - the enterprise cybersecurity nuke
* What would education look like with generative AI? Bueno or no bueno?
* Congrats to Rob with Caroline's graduation!
* Getting on the Gartner Magic Quadrant
* Congrats to Charlie Key and Losant for making it!
* How Jan got hooked on IoT Coffee Talk and became an IoT Coffee Talker
* IoT Coffee Talk's Clubhouse misadventure

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Our Kids. Just make a minimum donation to www.elevateourkids.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on BuzzSprout, like, subscribe, and share: https://iotcoffeetalk.buzzsprout.com


Open Core Open Source with Mermaid Chart's Knut Sveidqvist


This week we talk to Knut Sveidqvist, who brings over 20 years of software expertise to the table. Knut is the creator of the award-winning Mermaid open source project, and he's also the CTO at Mermaid Chart, the powerful JavaScript-based diagramming and charting tool that is building its business on an open core business model.





Download audio: https://r.zen.ai/r/cdn.simplecast.com/audio/24832310-78fe-4898-91be-6db33696c4ba/episodes/08b8db92-d1bd-4560-9188-009facae79ed/audio/d1bc59db-40fa-4021-93ac-1f8a002c35da/default_tc.mp3?aid=rss_feed&feed=gvtxUiIf