Principal Software Engineer at Allscripts in Malvern, Pennsylvania, Microsoft Windows Dev MVP, Husband, Dad and Geek.

This Week in Programming: AWS re:Invent for Developers


Another week, another massive conference with dozens of announcements piled one on top of another. This week, Amazon took over Las Vegas with its AWS re:Invent conference, and there was no lack of news. Of course, not all of it is necessarily applicable to you, the developer, so we figured we’d at least try to gather together the news you might care to read. So, that’s where we’ll start this week, with several articles written right here at The New Stack, before we take a look at some other news in the world of programming that may have gone unnoticed with all the hubbub!

This Week in AWS re:Invent

  • Quantum Computing-as-a-Service: AWS is joining Microsoft in offering a fully managed quantum computing-as-a-service, which goes by the name of Braket and will act as a marketplace for three different quantum computing hardware vendors. Each hardware provider offers a different architecture, and all three are supported in a single developer environment, which employs Jupyter notebooks and outputs results to Amazon S3.
  • Machine Learning Comes to Code Review: I’ve said it before and I’ll say it again – if you want to see code doing interesting stuff, just look for developers trying to make their own lives easier. Such may be the case with Amazon’s new CodeGuru machine learning service for automated code reviews and application performance recommendations, which we write “is one of the first ML-based code reviewing and profiling tools.” The tool arose from Amazon’s own internal code review process and draws on data from more than 10,000 open source projects on GitHub, making it able to “pinpoint resource leaks, atomicity violations, potential concurrency race conditions, unsanitized inputs, and wasted CPU cycles,” along with difficult-to-pinpoint issues on thread-safe classes, among other gotchas.
  • An IDE Specifically for Machine Learning: Amazon has followed up on its SageMaker platform, announced two years ago, with the launch of SageMaker Studio, an IDE to manage the full machine learning lifecycle that we write “includes a number of additional capabilities, including debugging, monitoring and even the automatic creation of ML models.” SageMaker Studio offers that “single pane of glass” experience where developers can build, train, tune, and deploy their ML models. AWS went way beyond just launching an IDE, also releasing several more tools, including SageMaker Experiments, SageMaker Processing, a new debugger, the SageMaker Model Monitor, and SageMaker Autopilot — so make sure to check out our full story for all the ML-goodness details.
  • Deep Learning For Java: While we’re talking about learning, AWS also introduced the Deep Java Library (DJL), an open source library to develop, train and run deep learning models in Java using high-level APIs. The DJL, they write, “will simplify the way you train and run predictions” and they offer a walk-through on “how to run a prediction with a pre-trained Deep learning model in minutes.” As for why they made the DJL, they say that there is a wealth of deep learning resources for Python but little for Java… hence, the DJL.
  • Machine Learning for Music Composition: Following up on previous hands-on machine learning devices, such as AWS DeepLens and AWS DeepRacer, this time around Amazon unveiled AWS DeepComposer, which lets you compose music with generative machine learning models. AWS DeepComposer is “a 32-key, 2-octave keyboard designed for developers to get hands on with Generative AI, with either pretrained models or your own,” which they say is the “world’s first machine learning-enabled musical keyboard.”
  • Java and .NET Support in The AWS CDK: Amazon’s AWS Cloud Development Kit (CDK) now offers support for Java and .NET. Previously, the AWS CDK supported TypeScript and Python to model and provision your cloud application resources through AWS CloudFormation.
  • AWS Frameworks for Mobile Devs: Finally, Amazon launched both Amplify iOS and Amplify Android, which are open source libraries to help developers “build scalable and secure mobile applications” and “easily add analytics, AI/ML, API (GraphQL and REST), datastore, and storage functionality to your mobile and web applications.”

This Week in Programming

  • A Tale of Two Rust IDE Supports: IDEs currently have two choices when it comes to supporting Rust — the Rust Language Server (RLS) and rust-analyzer. While they differ primarily in performance, the Rust IDE team sees their siloed development as a bit of an unfortunate situation, writing that “we’d like to change that and to find out how we can unify these efforts.” In analyzing the current status, they say that rust-analyzer offers greater performance with a “somewhat richer feature-set” while RLS offers “precision.” The Inside Rust blog post dives in a bit deeper, of course, but long story short, they propose that “it is possible to integrate both approaches without doubling the engineering effort” and that “if this approach works, we will consider freezing RLS and focusing fully on rust-analyzer.”
  • Be Heard About Rust: If this sort of sausage making is of interest to you, or even if not and you just have opinions about the future of Rust, then head on over to the 2019 State of Rust Survey and let them know what you think. The Rust team wants to “understand its strengths and weaknesses and establish development priorities for the future” and this is your chance to partake. The survey should take about 10-15 minutes, is optionally anonymous, and open until Dec. 16. Keep an eye out for the results in a month.
  • Microsoft’s “Rust-like” Programming Language: While we’re on the topic of Rust, have you heard about Microsoft’s new language, which currently goes by the name “Project Verona”? According to an article in ZDNet, Project Verona is a new Rust-based programming language for secure coding that comes out of the company’s desire to “make older low-level components in Windows 10 more secure by integrating Mozilla-developed Rust.” Not too long ago, a Microsoft employee made some news by disclosing that “the vast majority of bugs being discovered these days are memory safety flaws.” Now the company is looking to fix that with Rust-style memory safety, and Microsoft researcher Matthew Parkinson gave a talk on the company’s recent approach. Project Verona is Microsoft’s attempt to bring those memory safety guarantees to some of its low-level components while preserving performance, and Parkinson has said it will be made open source “soon,” so keep an eye out.
  • JetBrains Launches Into Space: While AWS re:Invent was going on in Las Vegas, Kotlin was also having its own conference, where JetBrains introduced a new developer collaboration tool called Space that provides git-based version control, code review, automation (CI/CD) based on Kotlin Scripting, package repositories, planning tools, an issue tracker and more. Space is currently available in early access and will be available as a service or self-hosted.

Feature image: AWS CEO Andy Jassy

The post This Week in Programming: AWS re:Invent for Developers appeared first on The New Stack.


Eliminating toil with fully automated load testing


Introduction

In 2013, when LinkedIn moved to multiple data centers across the globe, we needed a way to redirect traffic from one data center to another in order to mitigate potential member impact in the event of a disturbance to our services. This need led to the birth of one of the most important pieces of engineering at LinkedIn, called TrafficShift. It provides the ability to move live production traffic from one data center to another in an effortless manner. 

As we evolved with new services and saw exponential growth in traffic, keeping the site up remained critical in order to serve our members. It is our job as SREs to ensure the member experience is consistent and reliable. To do that, we need to make certain that our data centers are able to handle the growing demand, while simultaneously being prepared and having the ability to deal with an unexpected disaster scenario. Therefore, we incorporated load testing as a part of our daily operations work so that we can take a proactive approach to identifying our capabilities. Load testing not only helps us to identify the maximum operating capacity of our services, but also highlights any bottlenecks in our process and helps us determine if any services are degrading. 

To provide a bit of context, load testing is the practice of targeting a server with simulated HTTP traffic in order to measure capacity and performance of the system. At LinkedIn, we achieve the same effect by targeting our data centers with redirected live user traffic from other LinkedIn data centers to help us identify the maximum queries per second (QPS) a data center can handle. In this blog, we’ll discuss the evolution of our manual load testing process and our journey to fully automating that process. We’ll also review the major challenges we faced while trying to achieve automation, which eventually helped our Site Operations team work more efficiently and save hours from their days. 

LinkedIn traffic routing

Whenever a member navigates to https://www.linkedin.com in their browser, they connect to one of our PoPs (Points of Presence) using GeoDNS. These PoPs are the link between members and our data centers. If you’re not familiar with PoPs, think of them as miniature data centers consisting of IP virtual servers (IPVS) and Apache Traffic Servers (ATS) that act as the proxy between our members and data centers.

Figure 1: Stickyrouting and LinkedIn traffic routing architecture

From there, our Stickyrouting service assigns a primary and a secondary data center for each member. This assignment takes place using a Hadoop job that runs at a regular interval and assigns each member a primary and secondary data center based on the geographic distance between the member and the data center, while also considering the capacity constraints of each data center. When a member’s primary data center is offline, the secondary data center is used in order to ensure there is no member impact.

A special cookie, set by the Apache Traffic Servers (ATS), is used to route the member to their primary colo. The cookie contains routing information that indicates a "bucket" within a data center; buckets are subpartitions of a Stickyrouting partition, and each member is assigned a bucket in a data center. If the cookie is expired or not available, or if the data center the cookie points to is itself offline, then the Stickyrouting plugin talks to the backend to fetch the data center the member should be redirected to. Stickyrouting is therefore an important service that manages the mapping between members and data centers.
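
To make that decision flow concrete, here is a minimal sketch in JavaScript. It is purely illustrative: the function and field names are my own, not LinkedIn’s actual ATS plugin code.

// Illustrative only: route a member using the Stickyrouting cookie when
// possible, otherwise fall back to the Stickyrouting backend.
function resolveDataCenter (cookie, onlineDataCenters, lookupColoForMember, memberId) {
  const cookieIsUsable =
    cookie && !cookie.expired && onlineDataCenters.has(cookie.dataCenter)
  if (cookieIsUsable) {
    // Valid cookie pointing at an online colo: use its bucket directly.
    return { dataCenter: cookie.dataCenter, bucket: cookie.bucket }
  }
  // Cookie missing, expired, or pointing at an offline data center:
  // ask the backend which data center (primary or secondary) to use.
  return lookupColoForMember(memberId)
}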

Manual load test approach

As mentioned earlier, load testing for us is a daily practice and, for a long time, was a rather manual process. It required setting a predefined amount of production traffic as the target queries per second (QPS) for the load test and then manually figuring out how much QPS needed to be moved from other data centers to the current data center. 

This was done by marking the corresponding Stickyrouting buckets in other data centers as offline and using Stickyrouting in a controlled way to ensure we didn’t cause a disturbance to the member experience. 

However, before any of this came into the picture, we defined a target QPS for the load test based on historic peak traffic trends and business initiatives planned for the future. We also defined another target for the total QPS while the test was running, to account for live traffic that’s not part of the load test. To expand on that concept, if x is our final QPS, we generally set the target QPS slightly lower, to somewhere near x - y QPS, to accommodate live traffic increases, where y is a lower threshold.

We also had a high watermark set, where we encouraged service owners to plan for some extra amount of QPS over the actual load test target. This gave us leverage to anticipate any upticks in site traffic that could happen during the load test. 

Furthermore, the engineer defined additional parameters, such as the following (a purely illustrative configuration sketch appears after the list):

  • Bucket groups that define the number of buckets that will be offlined in one jump to reach target QPS, 

  • Group intervals that define the time to wait after we jump to target QPS,

  • Bucket intervals that define the time interval between each action being performed on a bucket, and 

  • Duration of load test that specifies how long a load test needs to be performed. 
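
As a rough sketch only (the names and numbers below are hypothetical, not LinkedIn’s real configuration), those parameters could be captured in something like the following:

// Hypothetical load test definition; every value here is made up.
const loadTestConfig = {
  targetQps: 250000,          // QPS the data center under test should reach
  bucketGroupSize: 10,        // buckets offlined in one jump toward the target
  groupIntervalSeconds: 300,  // wait time after each jump before re-evaluating
  bucketIntervalSeconds: 30,  // time between actions on individual buckets
  durationMinutes: 60         // how long to sustain the target QPS
}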

After the target QPS was reached, the engineer manually took control to reach the final QPS and sustain the target QPS for the duration of the load test.

During this time, the engineer would keep an eye on our monitoring dashboards to understand traffic levels and increase or decrease traffic flow to reach the target QPS. The engineer was also responsible for reviewing internal channels in the event that a service owner raised concerns, tracking any error notes, and understanding the latency of the overall site. In the event of an escalation, the engineer was also in charge of connecting with relevant SREs to review a potential issue. 

Given all of this was done with live traffic on a daily basis, it could be a nerve-racking experience and required a lot of manual effort. To address this, we explored how we could automate the load test process and save engineers from the 2-3 hours every day that a manual load test requires. 

Load test automation and challenges

We decided to tackle this problem in three stages: 

  • Stage one: ramp to 75% of the load test QPS.

  • Stage two: ramp to 90% of the load test QPS.

  • Stage three: reach our load test target QPS.

The idea was that the first two stages would ramp quickly, and that we’d need to spend more time carefully ramping traffic in the last stage to reach the target. Because our automation calculates the number of buckets to offline in the other data centers based on predefined logic, our assumption that the first two stages would ramp quickly proved to be true.
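
The jump-size calculation itself is simple; a hedged sketch of the idea (not LinkedIn’s actual logic) looks roughly like this:

// Illustrative only: estimate how many buckets to offline elsewhere in order
// to push roughly `additionalQps` of extra traffic into the data center
// under test, given an average QPS served per Stickyrouting bucket.
function bucketsToOffline (additionalQps, avgQpsPerBucket) {
  if (avgQpsPerBucket <= 0) throw new Error('avgQpsPerBucket must be positive')
  return Math.ceil(additionalQps / avgQpsPerBucket)
}

// Stage one, for example, ramps to 75% of the load test target:
// bucketsToOffline(0.75 * loadTestConfig.targetQps - currentQps, avgQpsPerBucket)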

It’s important to note we placed a sleep interval after the stage one and two ramps so that we could get an accurate report from our monitoring systems. We also considered the primary incoming traffic on the particular data center being load tested at these two stages to inform our decision to increase or decrease traffic. 

Once we completed stage two, we started trending in a more measured manner toward the target. In addition to having a high watermark of the target QPS plus the threshold QPS, we introduced a low watermark, which was the target QPS minus the same threshold. This gave us a window we had to land in before we could call the load test successful for a data center.

Figure 2: A fully automated load test being performed

Therefore, with our focus on these two watermarks, we came up with two small step functions. The first step function would increase traffic, while the other would decrease traffic, in both cases by moving buckets. Stage three used these two methods to read the current traffic and decide whether to ramp up or down.
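
Putting the watermarks and the two step functions together, the stage-three loop can be sketched roughly as follows. Again, this is an assumption-laden illustration: the function names, the fixed sleep, and the way traffic is read are mine, not LinkedIn’s implementation.

// Illustrative stage-three control loop: nudge traffic until it lands between
// the low and high watermarks, then hand over to the sustain phase.
async function rampToTarget ({ targetQps, thresholdQps, readCurrentQps,
                               offlineBucketGroup, onlineBucketGroup, sleep }) {
  const lowWatermark = targetQps - thresholdQps
  const highWatermark = targetQps + thresholdQps
  while (true) {
    const currentQps = await readCurrentQps()   // freshest metric available
    if (currentQps < lowWatermark) {
      await offlineBucketGroup()                // pull more traffic into this colo
    } else if (currentQps > highWatermark) {
      await onlineBucketGroup()                 // shed traffic back to other colos
    } else {
      return                                    // inside the window: target reached
    }
    await sleep(30000)                          // let monitoring catch up first
  }
}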

While trying to reach stage three, we needed to address the delay in our monitoring system pipeline, which caused our engineers to make less accurate decisions while shifting traffic from one data center to another. The delay meant we were relying on outdated traffic metrics, and this couldn't be fixed at the source itself.

To fetch the most accurate traffic QPS possible, we decided to query the pipeline more frequently, and as we homed in on fetching the most accurate data, we eventually arrived at precise step algorithms.

It was also important to make sure we could control and intervene at any point in the process. Therefore, we built functionalities such as “pause,” where an engineer could look into an alert or a concern from the SRE service owner, or “terminate,” which we could execute at any point in the load test.

Conclusion

Load testing for any organization, especially for those at such a large scale, can be a daunting task for an engineer, but it’s an essential part of our daily routine in assessing our ability to serve our members. As SREs, we strive to eliminate toil as much as we can, which not only helps the organization but also increases the productivity of our engineers and teams. This is why automating the load test process became an essential part of how we operate. 


Editor’s note: In case you missed the news, we’ve begun a multi-year journey to the public cloud with Microsoft Azure. Read more about our journey here.


How to Work with a Text Editor in an Angular 8 Application

In this article, you will learn how to work with a text editor in an Angular 8 application.

Connect 4 with Electron


Over the past few weeks I’ve been learning about ElectronJS (also known just as “Electron”), and wanted to write about my experiences and applications I built. In the process of learning, I built both an Angular and an Electron version of the classic game “Connect 4.”

The projects can be found at the following links:

I wrote both an Angular and an Electron version so that I could compare the two frameworks, and learn a little more about the underlying tooling in the process.

This post is going to cover some background about Electron and walk through building a “Connect 4” game with it. I’m also going to briefly discuss the Electron and Angular build implementations.

You can view a hosted version of the Angular version here, or watch a video of the Electron version in action:

What is Electron?


Electron is a framework that enables you to build Desktop applications with JavaScript.

Originally developed by GitHub, Electron uses Chromium and Node.js to build and package applications for desktop platforms. I was really impressed that a lot of applications that I already use are actually written with Electron! This includes VSCode and Atom.io!

Electron has really great documentation, and is an unopinionated framework. This means that you have the flexibility to build your Electron apps the way you want to (beyond some basic structure I’ll cover in the next section). Additionally, since Electron is JavaScript, it is not that difficult to convert frontend applications over to Electron. As part of my learning, I actually did this with an Angular application (more on this later).

To help with building Electron applications there are several CLI and boilerplate projects available. The quick-start app is a great place to start, as you can modify it easily to get up and running.

I also really liked working with electron-builder to build and package my application. If you do some googling, you’ll find that there are also several other tools, including electron-packager, that are good as well.

Finally, I also wanted to point out that if your team is already familiar with frontend technologies like JavaScript, CSS, and HTML, then using Electron is super intuitive. A lot of the skills web developers use every day can be leveraged with Electron. You can even utilize bundling platforms like webpack to do even more cool things with your Electron applications.

How are Electron Applications structured?

So borrowing from the official docs, your application really only consists of the following:

your-app/
├── package.json
├── main.js
└── index.html
  • The package.json file obviously manages your project’s dependencies, but also defines the main entry point of your application and (optionally) a build configuration.
  • The main.js file is where you define the application window behavior including size, toolbar menus, closing, icons, and a lot more.
  • The index.html page is the main presentation or “view” of your application. You can also pull in additional JavaScript libraries like you would with any other project.

From this basic setup, you can see how you could build out more complex applications. This setup is the bare minimum, and using basic HTML, CSS, and JavaScript you could build much bigger things with these building blocks.

You also obviously will need electron installed as a dependency or globally on your system to do builds, etc. This can be installed easily with just an npm i electron.

In addition to your dependencies, the package.json file will need to minimally have the following (again copied and pasted from the docs):

{
  "name": "your-app",
  "version": "0.1.0",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  }
}

Notice the “main” entry in the file; this identifies the location of your main.js file. This is fairly similar to the way that ExpressJS does this with an index.js file.

Also note that if you’re using electron-builder, you’ll want to define a build configuration. You can avoid this by just using their CLI; either way, the docs here will get you started.

In the main.js file (again copying from the docs), you typically would have a setup that looks like this:

const { app, BrowserWindow } = require('electron')

function createWindow () {
  // Create the browser window.
  let win = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
      nodeIntegration: true
    }
  })

  // Open the DevTools.
  win.webContents.openDevTools()

  // and load the index.html of the app.
  win.loadFile('index.html')
}

app.on('ready', createWindow)

What’s this code doing? Well first, you basically instantiate the application, and then it defines window behaviors. The createWindow method defines what the actual application will do as handled by the OS. Notice that you have to define how the window is closed, and that you need to load the index.html file.

Notice also this small section:

// Open the DevTools.
win.webContents.openDevTools()

Is that the same Chrome DevTools that we know and love? Why yes it is! Since Electron leverages the same internals that Chrome does for web applications, you can actually run DevTools and debug your Electron application the same way you would a web app with Chrome.

Additionally, this basic setup in the main.js file can be tuned for Mac, Windows, and Linux platforms. An example: on Mac, you normally “quit” an application instead of just closing the window, as shown in the sketch below.
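
For example, the quick-start handles this with the window-all-closed and activate events. The snippet below is adapted from the public Electron docs, so treat it as a sketch rather than part of my Connect 4 project:

// Quit when all windows are closed, except on macOS, where apps typically
// stay active until the user quits explicitly with Cmd + Q.
app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') {
    app.quit()
  }
})

// On macOS, re-create a window when the dock icon is clicked and no other
// windows are open.
app.on('activate', () => {
  if (BrowserWindow.getAllWindows().length === 0) {
    createWindow()
  }
})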

To complete your Electron app, you’d have a corresponding index.html file that looks like the following:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Hello World!</title>
    <!-- https://electronjs.org/docs/tutorial/security#csp-meta-tag -->
    <meta http-equiv="Content-Security-Policy" content="script-src 'self' 'unsafe-inline';" />
  </head>
  <body>
    <h1>Hello World!</h1>
    We are using node <script>document.write(process.versions.node)</script>,
    Chrome <script>document.write(process.versions.chrome)</script>,
    and Electron <script>document.write(process.versions.electron)</script>.
  </body>
</html>

Notice that it’s just straight HTML. This is just like the old days when you had to manually build pages before frameworks like Angular or React. However, this is also super simple, and you can imagine injecting custom components and other behaviors directly into your index.html page. If you’re familiar with the standard output from builders like webpack, then you can also see how easy it would be to reference the bundles and convert a frontend application to Electron.

I also left out things like the renderer.js file and the preload.js file which you typically will see in applications. These aren’t required to get started, but you see them in a lot of projects and can learn more about these options with the docs here.

The makers of Electron also have several nice examples you can review here as well.

Once you’ve got these basic files setup, you can start your application with electron . at the root directory of your project. For more on this, check out the getting started docs here.

How are Electron Apps Packaged?

As I mentioned in the previous section, once you’ve got your application up and running you can bundle your application with several different tools and utilities.

I found electron-builder super helpful. You just build your app similar to the quick-start I was just referencing, and then add electron-builder as an NPM dependency to your project.

The other builders that are available have similar configurations, but the basic idea is to compile your JavaScript, CSS, and HTML into binaries for the different platforms. For Mac you’d have a DMG or .app file, Windows would have a .exe file, and so on. The resulting binaries could then be signed and distributed via the normal channels like the Mac App Store or another deployment option.

For my “Connect 4” app, I used electron-builder and defined a “build” configuration in my package.json file like the following:

"build": {
  "appId": "connect_4_with_electron",
  "mac": {
    "category": "public.app-category.entertainment"
  }
}

In addition to this setup, I also used the electron-builder CLI to create the packaged versions of my application.

Between the two of them, I actually favored the CLI because it requires the least amount of configuration. I think that ultimately, whichever one you choose is based on the requirements for your project.
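
For reference, wiring the CLI into your package.json scripts is usually all you need. The script names below are my own choices, and you should double-check the electron-builder docs for the flags that match your target platforms:

"scripts": {
  "start": "electron .",
  "pack": "electron-builder --dir",
  "dist": "electron-builder"
}

The --dir flag produces an unpacked build that’s handy for quick local checks, while the plain electron-builder invocation produces the installable artifacts for the current platform.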

Electron and Angular Builds


So all of this summary has brought us to the point of being able to discuss my “Connect 4” Electron app. You can go ahead and do a git clone of the project here. You can also refer to the Angular version of the project here.

The project itself basically follows the same conventions as I’ve already walked through. The “sketch” or graphical part of the Connect 4 game board is done with P5JS.

The cool part is that my Electron implementation of the project is super similar to my Angular implementation of the same code.

The Electron project has the same main.js, index.html, and package.json as we’ve already discussed. The only real difference was that I had to follow some conventions of how P5JS sketches work (check out the docs for more). I also created a context menu and did a few other small customizations.

Additionally, if you look in the main home-page-component.ts it will have a very similar structure to the sketch.js file that is in the Electron app. I’m not going to go into how P5JS renders images, but you can compare these two sections of the projects and understand how similar they are.

What I really wanted to highlight, however, was just how similar the code is. I’m just using Angular here since I’m a fan, but you can theoretically do this for any of the main frontend frameworks. The biggest thing is just understanding how the apps are bundled with a central index.html file and supporting code “chunks” and CSS styles.

Both Angular and Electron are composed of JavaScript, CSS, and HTML that bundles to form the application. The Angular CLI creates a bundle with webpack that can be deployed. Electron relies on the JavaScript, CSS, and HTML to render its application, and uses builders to package binaries for distribution.

You can really see the similarities when you compare the Angular bundle generated by the CLI and webpack with the basic Electron application structure.

In the Angular implementation of my “Connect 4” game, the final bundle looks like the following:

.
├── assets
│   └── favicon.ico
├── favicon.ico
├── index.html
├── main-es2015.js
├── main-es2015.js.map
├── main-es5.js
├── main-es5.js.map
├── polyfills-es2015.js
├── polyfills-es2015.js.map
├── polyfills-es5.js
├── polyfills-es5.js.map
├── runtime-es2015.js
├── runtime-es2015.js.map
├── runtime-es5.js
├── runtime-es5.js.map
├── styles-es2015.js
├── styles-es2015.js.map
├── styles-es5.js
├── styles-es5.js.map
├── vendor-es2015.js
├── vendor-es2015.js.map
├── vendor-es5.js
└── vendor-es5.js.map

Now compare this to the structure of the Electron version of the “Connect 4” application (before being packaged obviously):

.
├── LICENSE
├── README.md
├── dist
├── icon.icns
├── index.html
├── main.js
├── node_modules
├── package-lock.json
├── package.json
├── preload.js
├── renderer.js
├── sketch.js
└── style.css

It’s not that hard to see how you could take the build created from the Angular project and build an Electron app from it. You really would just need to pull in the main.js, preload.js, and renderer.js files and make them reference the associated bundles from the Angular CLI and webpack. This isn’t a trivial task and would require some testing, but I just wanted to point out that the basic building blocks are there.
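
As a rough sketch of what that wiring could look like, assuming the Angular CLI output has been copied into a dist folder next to main.js (this is not taken verbatim from either of my projects):

const path = require('path')
const { app, BrowserWindow } = require('electron')

function createWindow () {
  const win = new BrowserWindow({ width: 900, height: 700 })
  // Point Electron at the index.html produced by the Angular CLI build, which
  // in turn references the main-*, polyfills-*, runtime-*, styles-* and
  // vendor-* bundles shown in the listing above.
  win.loadFile(path.join(__dirname, 'dist', 'index.html'))
}

app.on('ready', createWindow)

In practice you may also need to adjust the base href in the Angular-generated index.html so the relative bundle paths resolve when loaded over the file:// protocol.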

I was actually able to do this with a different project. It did require quite a bit of googling and learning about the different builders. The issue really wasn’t with converting the project, but rather just in understanding how to correctly generate the binaries I wanted. If you want to do something like this, I recommend you go incrementally and take your time. It’s easier to do with a smaller project first.

Closing Thoughts

I hope you’ve enjoyed this post, and it’s been some help in getting a background with Electron. I recommend checking out my projects on GitHub for reference.

In general, I’ve had a good experience working with the platform and building applications. I think it’s really cool that you can leverage frontend skills to build desktop applications. I also really liked the documentation, and large amount of information available on working with Electron. It was fairly easy to get up and running overall.

Also, when you’re ready to package and deploy I highly recommend electron-builder and its associated CLI. They made building electron applications easier, and overall were very good to work with.

Follow me on Twitter at @AndrewEvans0102!


Azure Advent Calendar – Week 1 recap


Week 1 of the Azure Advent Calendar has come and gone and we have seen some incredible content.

Content covered includes:

An Azure Poem, Azure Governance, Azure Logic Apps, Azure Service Health, Azure Container Instances, Azure DevOps Pipelines, Azure NetApp Files, Azure Certification Paths, Azure AKS, Azure API Management, Azure Lighthouse, Azure Site Recovery, Azure Functions, Azure WebApps, Azure MFA, Azure Role-Based Certification, Being Successful in Azure, Azure Migrate, Azure Key Vault, AKS monitoring with Prometheus, and Terraform for Azure.

Phew, that’s a lot to learn about in just one week. There is a lot more to come, so please subscribe to our dedicated YouTube channel.

So far we have over 700 subscribers, and there have been over 350 hours of video watched, which is absolutely awesome.

The Azure Advent Calendar website has been viewed in over 120 countries around the globe and had almost 6,000 hits in the last 90 days.

We want to take this time to thank everyone for taking part and hope that everyone is enjoying the #azureadventcalendar so far. We appreciate all of the tweets, LinkedIn coverage, etc. It’s been a blast so far, and we’re loving all the Christmas jumpers on show.

Thanks all from Gregor and Richard aka @Pixel_Robots

 






Microsoft Graph presence APIs are now available in public preview


Today, we’re excited to announce the preview of Microsoft Graph presence APIs. You can use these APIs to retrieve presence information for users in your organization.

You can access these APIs using user delegated permissions with admin consent.

An image showing the architecture of the presence APIs.

Presence APIs return two key parameters: availability and activity.

  • Availability is basic presence information that returns values such as Available, Busy, Away, and so on.
  • Activity is supplemental information about a user’s availability, such as InACall, InAMeeting, OutOfOffice, and so on.

For a list of the possible values that are returned, see the presence resource topic. These values are aligned with the Teams presence states.
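
As a quick illustration, a call from JavaScript might look like the following. This assumes you have already acquired a delegated access token with the required consent, and it uses the beta /me/presence endpoint that this preview exposes, together with the global fetch available in recent Node.js versions and in browsers:

// Minimal sketch: read the signed-in user's presence from the beta endpoint.
// `accessToken` is assumed to have been obtained separately with the
// appropriate delegated permissions and admin consent.
const accessToken = process.env.GRAPH_ACCESS_TOKEN

async function getMyPresence () {
  const response = await fetch('https://graph.microsoft.com/beta/me/presence', {
    headers: { Authorization: `Bearer ${accessToken}` }
  })
  if (!response.ok) {
    throw new Error(`Graph request failed: ${response.status}`)
  }
  const presence = await response.json()
  console.log(presence.availability, presence.activity) // e.g. "Busy", "InACall"
  return presence
}

getMyPresence()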

Next steps

Try the APIs using Microsoft Graph Explorer.

If you have any feedback about or suggestions for these APIs, please let us know via User Voice (under Cloud Communications).

Happy coding!

The post Microsoft Graph presence APIs are now available in public preview appeared first on Microsoft 365 Developer Blog.
