Developers are doing more testing, according to JetBrains’ recent State of Developer Ecosystem report.
The percentage of developers who test has gone up from 85% last year to 95% in 2024. The proportion of developers who are doing unit tests, integration tests and end-to-end tests also rose.
However, only 18% use artificial intelligence in their testing tools.
The survey also looked at whether AI provides people with more time to code. Users overwhelmingly say that saving time and doing things faster are the top benefits of using AI tools for development.
Sixty-five percent said they spend more than half of their work time coding, up from 57% in 2023. Half of those who use AI tools save at least 2 hours a week. In contrast, 4% say they don’t save any time per week by using these tools, and another 46% save no more than 2 hours a week.
It’s worth noting that only 23% say using AI tools for coding actually improves the quality of the code and solutions being created.
It also seems that earlier estimates of GitHub Copilot use may have been overstated.
In 2024, JetBrains asked whether people had used specific AI tools for coding and other development activities, rather than for any purpose. Asked this way, GitHub Copilot usage fell from 46% to 26%, and ChatGPT use fell from 70% to 49%.
The above section was written by Lawrence Hecht, TNS Analyst.
Weaviate Offers Hosted Embedded Service for AI Applications
Vector database company Weaviate launched a new hosted embedding service for AI applications this month. Called Weaviate Embeddings, the service supports both open source and proprietary embedding models. It gives developers full control over their embeddings, allowing them to switch between models. Also, it does not have a rate limit on embeddings per second in production environments.
The service is hosted in Weaviate Cloud and runs on GPUs.
Tabnine Feature Flags Unlicensed Code in AI-Generated Software
Tabnine, creator of the original AI code assistant, introduced a feature called Code Provenance and Attribution that checks AI-generated code to see if there are potential IP or copyright issues with the code.
It checks generated code against publicly visible GitHub code and flags any matches. The checker references the source repository as well as the license type, making it easy for a developer to determine whether the code can be used under the organization’s specific standards and requirements.
Tabnine soon expects to add the capability to allow users to identify specific repos, such as those maintained by competitors, and then have Tabnine check generated code against them as well. It also plans to add censorship capability, allowing Tabnine administrators to remove matching code before it is displayed to the developer.
Right now, Code Provenance and Attribution is in private preview and open to any Tabnine enterprise customer. It works with all available models.
Google Launches Gemini 2.0 Flash and JavaScript/Python Code Assistant
Google has updated its Gemini Flash model. Gemini 2.0 Flash is twice as fast as 1.5 Pro, the company said. It also introduced the Multimodal Live API for building dynamic applications with real-time audio and video streaming, according to the blog post.
Developers can use Gemini 2.0 Flash to generate responses that can include text, audio and images through an API call. Gemini 2.0 Flash can be accessed using the Gemini API in Google AI Studio and Vertex AI. Right now it’s experimental, but general availability is expected next year.
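As a rough illustration of what such an API call involves (this sketch is not from Google’s announcement; the experimental model name and endpoint shape are assumptions based on Google’s published Gemini API conventions), a minimal request can be assembled with nothing but the Python standard library:

```python
import json
import urllib.request

# Assumed model name and REST endpoint; check Google's Gemini API docs
# for the current values before using this in practice.
API_URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-2.0-flash-exp:generateContent")

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble (but do not send) a generateContent request."""
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Write a haiku about unit tests.", "YOUR_API_KEY")
# Send with urllib.request.urlopen(req) once a real key is supplied.
```

No network call is made above; the request object simply shows the payload shape a developer supplies when generating responses through the API.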
Gemini 2.0 is trained to use tools, which Google noted is a foundational capability for building AI agentic “experiences.” It can natively call tools like Google Search and code execution in addition to custom third-party functions via function calling.
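To make the function-calling idea concrete, here is a hedged sketch of how a custom third-party function is exposed to the model. The weather function and its schema are invented for this example; the `tools`/`function_declarations` payload shape follows Google’s documented Gemini API conventions:

```python
# Invented example tool: an OpenAPI-style function declaration that tells
# the model what the function does and what arguments it accepts.
get_weather_decl = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

def build_payload(prompt: str) -> dict:
    """Assemble a generateContent payload that exposes the custom tool."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "tools": [{"function_declarations": [get_weather_decl]}],
    }

payload = build_payload("What's the weather in Oslo?")
print(sorted(payload))  # ['contents', 'tools']
```

When the model decides the tool is relevant, its response contains a structured function call with the arguments it chose; the application runs the function and feeds the result back in a follow-up request.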
Using Google Search natively as a tool leads to more factual and comprehensive answers and increases traffic to publishers, the post added.
“Multiple searches can be run in parallel leading to improved information retrieval by finding more relevant facts from multiple sources simultaneously and combining them for accuracy,” the post stated.
Google also introduced Jules, an experimental AI-powered code agent that can handle Python and JavaScript coding tasks.
“Working asynchronously and integrated with your GitHub workflow, Jules handles bug fixes and other time-consuming tasks while you focus on what you actually want to build,” the post stated. “Jules creates comprehensive, multistep plans to address issues, efficiently modifies multiple files, and even prepares pull requests to land fixes directly back into GitHub.”
Right now, Jules is available to a “select group of trusted testers,” with plans to open it to other developers in early 2025.
Finally, there is a trusted tester program developers can join to try out the Colab data science agent. It allows developers to describe their analysis goals in plain language, and then it builds a Colab notebook. It’s expected to be more widely available in the first half of 2025.
The post Developers Testing More, JetBrains Study Finds appeared first on The New Stack.