Posted by Sandhya Mohan, Product Manager
Gratitude is a mental wellness Android app that encourages self-care and positivity with techniques like in-app journaling, affirmations, and vision boards. These mindfulness exercises need to be free from performance bottlenecks, bugs, and errors for the app to be truly immersive and helpful, but researching solutions and debugging code took valuable time away from experimenting with new features. To find a better balance, Gratitude used Gemini in Android Studio to help improve the app’s code and streamline the development process, enabling the team to implement those exciting new features faster.
Gratitude’s AI image generation feature, built in record time with the help of Gemini in Android Studio
Unlocking new efficiencies with Gemini in Android Studio
The Gratitude team decided to try Gemini in Android Studio, an AI assistant that supports developers throughout all stages of development, helping them be more productive. Developers can ask Gemini questions and receive context-aware solutions based on their code. Divij Gupta, senior Android developer at Gratitude, shared that the Gratitude team needed to know if it was possible to inject any object into a Kotlin object class using Hilt. Gemini suggested using an EntryPoint to access dependencies in classes where standard injection isn’t possible, which helped solve their “tricky problem,” according to Divij.
Gemini eliminated the need to search for Android documentation as well, enabling the Gratitude team to learn and apply their knowledge without having to leave Android Studio. “Gemini showed me how to use Android Studio's CPU and memory profilers more effectively,” recalled Divij. “I also learned how to set up baseline profiles to speed up cold starts.”
Experimenting with new features using Gemini in Android Studio
Gemini in Android Studio helped the Gratitude team significantly improve their development speed and morale. “This faster cycle has made the team feel more productive, motivated, and excited to keep innovating,” said Divij. Developers are able to spend more time ideating and experimenting with new features, leading to innovative new experiences.
One feature the developers built with their newfound time is an image generation function for the app’s vision boards. Users can now upload a photo with a prompt and receive an AI-generated image that they can instantly pin to their board. The team built the UI using Gemini in Android Studio’s Compose Preview Generation, allowing them to quickly visualize their Jetpack Compose code and craft the pixel-perfect UI their designers intended.
Going forward, the Gratitude team plans to use Gemini to make further improvements to its code, such as fixing glitches, plugging memory leaks, and boosting performance based on additional insights from Gemini, all of which will further improve the user experience.
Build with Gemini in Android Studio
In March, Amazon Web Services (AWS) became the first cloud service provider to deliver DeepSeek-R1 in a serverless way by launching it as a fully managed, generally available model in Amazon Bedrock. Since then, customers have used DeepSeek-R1’s capabilities through Amazon Bedrock to build generative AI applications, benefiting from Bedrock’s robust guardrails and comprehensive tooling for safe AI deployment.
Today, I am excited to announce DeepSeek-V3.1 is now available as a fully managed foundation model in Amazon Bedrock. DeepSeek-V3.1 is a hybrid open weight model that switches between thinking mode (chain-of-thought reasoning) for detailed step-by-step analysis and non-thinking mode (direct answers) for faster responses.
According to DeepSeek, the thinking mode of DeepSeek-V3.1 achieves answer quality comparable to DeepSeek-R1-0528 while responding faster, with stronger multi-step reasoning for complex search tasks and significant gains in thinking efficiency.
| Benchmark | DeepSeek-V3.1 | DeepSeek-R1-0528 |
| --- | --- | --- |
| Browsecomp | 30.0 | 8.9 |
| Browsecomp_zh | 49.2 | 35.7 |
| HLE | 29.8 | 24.8 |
| xbench-DeepSearch | 71.2 | 55.0 |
| Frames | 83.7 | 82.0 |
| SimpleQA | 93.4 | 92.3 |
| Seal0 | 42.6 | 29.7 |
| SWE-bench Verified | 66.0 | 44.6 |
| SWE-bench Multilingual | 54.5 | 30.5 |
| Terminal-Bench | 31.3 | 5.7 |
DeepSeek-V3.1 model performance in tool usage and agent tasks has significantly improved through post-training optimization compared to previous DeepSeek models. DeepSeek-V3.1 also supports over 100 languages with near-native proficiency, including significantly improved capability in low-resource languages lacking large monolingual or parallel corpora. You can build global applications to deliver enhanced accuracy and reduced hallucinations compared to previous DeepSeek models, while maintaining visibility into its decision-making process.
Key use cases for this model include agentic workflows and tool use, multilingual applications, and complex multi-step reasoning tasks such as coding and deep search.
As I mentioned in my previous post, when implementing publicly available models in your production environments, give careful consideration to data privacy requirements, check for bias in output, and monitor your results in terms of data security, responsible AI, and model evaluation.
You can access the enterprise-grade security features of Amazon Bedrock and implement safeguards customized to your application requirements and responsible AI policies with Amazon Bedrock Guardrails. You can also evaluate and compare models to identify the optimal model for your use cases by using Amazon Bedrock model evaluation tools.
Get started with the DeepSeek-V3.1 model in Amazon Bedrock
If you’re new to using the DeepSeek-V3.1 model, go to the Amazon Bedrock console and choose Model access under Bedrock configurations in the left navigation pane. To access the fully managed DeepSeek-V3.1 model, request access for DeepSeek-V3.1 in the DeepSeek section. You’ll then be granted access to the model in Amazon Bedrock.
Next, to test the DeepSeek-V3.1 model in Amazon Bedrock, choose Chat/Text under Playgrounds in the left menu pane. Then choose Select model in the upper left, and select DeepSeek as the category and DeepSeek-V3.1 as the model. Then choose Apply.
Using the selected DeepSeek-V3.1 model, I run the following example prompt about a technical architecture decision.
Outline the high-level architecture for a scalable URL shortener service like bit.ly. Discuss key components like API design, database choice (SQL vs. NoSQL), how the redirect mechanism works, and how you would generate unique short codes.
You can turn thinking on or off with the Model reasoning mode toggle, which generates the response’s chain of thought before the final answer.
You can also access the model using the AWS Command Line Interface (AWS CLI) and AWS SDKs. The model supports both the InvokeModel and Converse APIs. You can check out a broad range of code examples for multiple use cases in a variety of programming languages.
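As a minimal sketch of the Converse API path using the AWS SDK for Python (Boto3): note that the model ID below is a placeholder, not a confirmed identifier, so look up the actual DeepSeek-V3.1 model ID for your Region in the Amazon Bedrock console or documentation.

```python
# Placeholder model ID -- check the Bedrock console for the actual
# DeepSeek-V3.1 identifier available in your Region.
MODEL_ID = "deepseek.v3-1"


def build_converse_request(prompt: str, model_id: str = MODEL_ID) -> dict:
    """Assemble the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.6},
    }


def ask(prompt: str, region: str = "us-west-2") -> str:
    """Send the prompt via the Converse API and return the model's text reply."""
    import boto3  # deferred import: only needed when actually calling AWS

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(**build_converse_request(prompt))
    # The Converse API returns the assistant message under output -> message.
    return response["output"]["message"]["content"][0]["text"]


# Example (requires AWS credentials with Bedrock model access):
# print(ask("Outline the high-level architecture for a scalable URL shortener."))
```

Separating the request builder from the network call keeps the message shape easy to inspect and reuse with InvokeModel-style tooling or tests.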
To learn more, visit DeepSeek model inference parameters and responses in the AWS documentation.
Now available
DeepSeek-V3.1 is now available in the US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Europe (London), and Europe (Stockholm) AWS Regions. Check the full Region list for future updates. To learn more, check out the DeepSeek in Amazon Bedrock product page and the Amazon Bedrock pricing page.
Give the DeepSeek-V3.1 model a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
— Channy