Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Tech Hiring: Most Employers Added Jobs or Kept the Status Quo in 2023

Most IT hiring managers said they either added positions to their tech staffs or maintained the status quo in 2023.

Despite massive layoffs in this post-pandemic era by major tech companies, a new survey shows that most organizations overall didn’t cut their technical staff in 2023 — 37% of survey participants said they added tech employees in 2023, while 34% maintained the status quo in terms of tech hiring, and only 29% in aggregate eliminated technical headcount last year.

Cloud providers reported the biggest increases in 2023 (59% said they added tech staff). Meanwhile, hardware and software companies were more sensitive to the economic cycle: only 28% increased tech staff, while 37% decreased it.

Read the whole story
alvinashcraft
1 hour ago
reply
West Grove, PA
Share this story
Delete

The Future of SQL: Conversational Hands-on Problem Solving


If you’re returning to SQL after a long absence, as I did a few years back, there are important changes to know about. First, JSON: many SQL-oriented databases now support JSON columns for arbitrary tree-structured data. Second, common table expressions (CTEs), which let you express a complex query as a pipeline of steps that are simple to understand and verify.

The JSON features can be confusing, as in Steampipe queries like this one, which implicitly joins the table github_my_gist with the expansion of its JSON column, files.

select
  file ->> 'language' as language,
  count(*) as count
from
  github_my_gist g,
  jsonb_array_elements(g.files) file
group by
  language
order by
  count desc;


Exhibit A

The query counts GitHub gists by language, and produces output like this.

| language    | count |
|-------------|-------|
| Markdown    | 34    |
| Python      | 15    |
| JavaScript  | 7     |
| null        | 7     |


Here’s a different version of the query that produces the same result.

-- cte 1 to unnest the json

with expanded_files as (
    select
        g.id as gist_id,
        jsonb_array_elements(g.files) as file
    from
        github_my_gist g
),

-- sample cte 1 output

-- | gist_id | file                         |
-- |---------|------------------------------|
-- | 1       | {"language": "Python"}       |
-- | 2       | {"language": "Markdown"}     |
-- | 3       | {"language": "JavaScript"}   |
-- | 4       | {"language": "Python"}       |


-- cte 2 to extract the language

languages as (
    select
        file ->> 'language' as language
    from
        expanded_files
)

-- sample cte 2 output

-- | language    |
-- |-------------|
-- | Python      |
-- | Markdown    |
-- | JavaScript  |
-- | Python      |

-- final phase to count languages

select
    language,
    count(*) as count
from
    languages
group by
    language
order by
    count desc;

-- sample final output

-- | language   | count |
-- |------------|-------|
-- | Python     | 2     |
-- | Markdown   | 1     |
-- | JavaScript | 1     |


Exhibit B

Levels of Expertise

If you’re well versed in set-returning JSON functions like Postgres’ jsonb_array_elements, which converts a JSON list into a set of rows, and if you can visualize how that transformation interacts with joins, you can craft powerful queries like exhibit A very concisely.

That economy of expression can be good for experts, but newcomers can struggle to mentally unroll the implied steps of the transformation. By “newcomer” I do not mean novice; I mean not yet an expert in this combination of disciplines. (That includes me, by the way, despite years of engagement with SQL at this level.)

From that perspective, you might want to see the steps spelled out explicitly, as in exhibit B. Creating versions of exhibit B is something I do in our support channel, and wanted to do more easily. So I made a simple GPT for that — and when I say “simple GPT,” I mean something like a simple bash script: a quickly built tool that can save more time and/or effort than it costs to build it.

Because it’s broken down into a pipeline of checkable steps, exhibit B is easier to debug, use confidently and revise safely. You could then collapse it down to exhibit A, which might be more efficient, though that’s not necessarily true.

You could even make both versions available, so experts and nonexperts can each see through their preferred lenses. Arguably that’s another form of accessibility, alongside everything else we mean by that word.

Here’s the prompt for this GPT. I used it to ask for the names of the issue templates for a given repo, given this schema and an issue_templates column that looks like this:

+------------------------------------------------------------------------------------------------------------------
| issue_templates
+------------------------------------------------------------------------------------------------------------------
| [
|     {
|         "body": "**Describe the bug**\nA clear and concise description of what the bug is.\n\n**Steampipe version
|         "name": "Bug report",
|         "about": "Create a report to help us improve",
|         "title": "",
|         "filename": "bug_report.md"
|     },
|     {
|         "body": "**Is your feature request related to a problem? Please describe.**\nA clear and concise descript
|         "name": "Feature request",
|         "about": "Suggest an idea for this project",
|         "title": "",
|         "filename": "feature_request.md"
|     }
| ]
+------------------------------------------------------------------------------------------------------------------


In that context, I literally just asked for a query that (per above) lists the names of issue templates for a given repo. Here’s the concise version of the generated (and documented) query.

SELECT
  template ->> 'name' AS template_name
FROM
  github_repository,
  jsonb_array_elements(issue_templates) AS template
WHERE
  full_name = 'your-repository-full-name';  -- Replace 'your-repository-full-name' with the actual full name of your repository


And here’s the expanded version.

-- CTE to expand the JSON array of templates
WITH expanded_templates AS (
    SELECT
        r.id AS repo_id,
        jsonb_array_elements(r.issue_templates) AS template
    FROM
        github_repository r
    WHERE
        r.full_name = 'your-repository-full-name'  -- Replace 'your-repository-full-name' with the actual full name of your repository
),

-- Sample data after CTE 1
-- | repo_id | template                                |
-- |---------|-----------------------------------------|
-- | 1       | {"name": "Bug report", ...}             |
-- | 1       | {"name": "Feature request", ...}        |

-- CTE to extract the template names
template_names AS (
    SELECT
        template ->> 'name' AS template_name
    FROM
        expanded_templates
)

-- Sample data after CTE 2
-- | template_name    |
-- |------------------|
-- | Bug report       |
-- | Feature request  |

-- Final selection
SELECT
    template_name
FROM
    template_names;


This approach works well enough for simple cases like these, but not so much for more complex ones like this one, which finds S3 buckets with policies that grant external access. In that situation, you don’t just need Postgres knowledge: You also need to know how AWS policies are constructed, and then you need to work out how to use Postgres joins and JSONB operators to query them. If the GPT initially fails to do that for you, that isn’t the end of the story. Having provided a description of the result you want, along with the schema for a table and a sample of a required JSON column, you’ve set up a context for a conversation with an entity that has seen vastly more SQL patterns and AWS policy patterns than you ever will.

Conversational Hands-on Learning

I keep returning to the theme of choral explanations (#4 on my list of best practices), and it’s especially relevant in the SQL domain where there are just so many ways to write a query.

Exploring the range of possibilities used to be arduous, time-consuming and hard to justify. Now it’s becoming hard to justify not doing that; optimizations (sometimes major ones) can and do emerge.

Arguably it has always required a kind of alien intelligence to grok SQL, not to mention query planners. In conversation with LLMs, we can now rapidly explore the space of possibilities and more easily evaluate how different approaches will perform. How else could I write this query? Why would I do it that way? How will the database handle it? (Maybe you can fluently read and understand query plans but I can’t, and I gratefully accept all the help I can get.)

I routinely ask LLMs these kinds of questions and receive answers that are not theoretical, but are versions of my query — working with my data — that I can immediately try, and that lead to follow-up questions that I can also explore cheaply.


In one test of my latest GPT, I wondered about translating Postgres idioms to SQLite. Postgres and SQLite JSON patterns are quite different. Holding both sets of patterns in your head, and mentally mapping between them, is only a means to an end. If I’m considering whether it’s feasible to switch databases, I don’t want to invest in a deep understanding of SQLite patterns that I wind up never needing. I just want to know what’s possible.

The GPT, which was nominally about Postgres, was happy to help. All you’re really doing with these GPTs is setting an initial context. At any point, you can steer the conversation wherever you want it to go.

Here’s the SQLite counterpart to the query that counts gists by language.

select
  json_extract(value, '$.language') as language,
  count(*) as count
from
  github_my_gist,
  json_each(github_my_gist.files)
group by
  language
order by
  count desc;


ChatGPT gave it to me instantly; I tested it, and it worked. Of course, I then wanted to unroll this compact version to visualize the query step by step. It turns out that you can’t eliminate the join, as far as I can tell. Here is ChatGPT’s explanation.

json_each: This is SQLite’s equivalent to jsonb_array_elements, but it functions slightly differently. It must be used in the FROM clause and typically combined directly with the table it is extracting data from due to the less flexible nature of SQLite’s query planner regarding complex JSON operations.

Is that strictly accurate? I don’t know, but it comports with the behavior I’m seeing, which of course is behavior that ChatGPT enabled me to effortlessly conjure into being. This sort of conversational hands-on learning is the signal I’m following to cut through the noise and hype around AI.
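That kind of hands-on check is easy to reproduce. As a rough sketch (the table contents here are invented for illustration), Python’s built-in sqlite3 module can run the json_each query from above against an in-memory database:

```python
import json
import sqlite3

# In-memory database with a stand-in for the github_my_gist table.
conn = sqlite3.connect(":memory:")
conn.execute("create table github_my_gist (id integer, files text)")
sample_gists = [
    (1, json.dumps([{"language": "Python"}, {"language": "Markdown"}])),
    (2, json.dumps([{"language": "Python"}])),
    (3, json.dumps([{"language": "JavaScript"}])),
]
conn.executemany("insert into github_my_gist values (?, ?)", sample_gists)

# json_each is a table-valued function, so it appears in the FROM clause
# joined with the source table -- the join the explanation describes.
rows = conn.execute("""
    select
      json_extract(value, '$.language') as language,
      count(*) as count
    from
      github_my_gist,
      json_each(github_my_gist.files)
    group by
      language
    order by
      count desc
""").fetchall()

print(rows)  # Python leads with 2; the two singletons follow in arbitrary order
```

This requires a SQLite build with the JSON functions enabled, which is the norm in recent Python distributions.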

Ultimately I don’t care about SQL or JSON; I want to climb up the ladder of cognition in order to solve problems that yield to data acquisition and analysis. I’m not blind to the dark patterns embodied in the most powerful LLMs, but I’m unable to ignore the boost they can deliver. Many kinds of work require us to reason over information at scale, and not just over your code and documentation, though that’s our focus here. I don’t want radiologists to rely solely on AI, but I do want them to consult entities that have seen far more X-rays and diagnostic interpretations than they ever will. In the realm of infotech, I want wranglers of code and data to make the best possible use of these new reasoning partners.

The post The Future of SQL: Conversational Hands-on Problem Solving appeared first on The New Stack.


Microsoft Joins Thorn and All Tech Is Human to enact strong child safety commitments for generative AI


While millions of people use AI to supercharge their productivity and expression, there is the risk that these technologies are abused. Building on our longstanding commitment to online safety, Microsoft has joined Thorn, All Tech Is Human, and other leading companies in their effort to prevent the misuse of generative AI technologies to perpetrate, proliferate, and further sexual harms against children. Today, Microsoft is committing to implementing preventative and proactive principles into our generative AI technologies and products.

This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to collectively tackling tech and society’s complex problems, aims to mitigate the risks generative AI poses to children. The principles also align to and build upon Microsoft’s approach to addressing abusive AI-generated content. That includes the need for a strong safety architecture grounded in safety by design, to safeguard our services from abusive content and conduct, and for robust collaboration across industry and with governments and civil society. We have a longstanding commitment to combating child sexual exploitation and abuse, including through critical partnerships such as the Tech Coalition and the WeProtect Global Alliance. These principles will support us as we take forward our comprehensive approach.

As a part of this Safety by Design effort, Microsoft commits to take action on these principles and transparently share progress regularly. Full details on the commitments can be found on Thorn’s website here and below, but in summary, we will:

  • DEVELOP: Develop, build, and train generative AI models to proactively address child safety risks.
  • DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.
  • MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.

Today’s commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. This collective action underscores the tech industry’s approach to child safety, demonstrating a shared commitment to ethical innovation and the well-being of the most vulnerable members of society.

We will also continue to engage with policymakers on the legal and policy conditions to help support safety and innovation. This includes building a shared understanding of the AI tech stack and the application of existing laws, as well as on ways to modernize law to ensure companies have the appropriate legal frameworks to support red-teaming efforts and the development of tools to help detect potential CSAM.

We look forward to partnering across industry, civil society, and governments to take forward these commitments and advance safety across different elements of the AI tech stack. Information-sharing on emerging best practices will be critical, including through work led by the new AI Safety Institute and elsewhere.

Our full commitment

DEVELOP: Develop, build, and train generative AI models that proactively address child safety risks

  • Responsibly source our training datasets, and safeguard them from child sexual abuse material (CSAM) and child sexual exploitation material (CSEM): This is essential to helping prevent generative models from producing AI generated child sexual abuse material (AIG-CSAM) and CSEM. The presence of CSAM and CSEM in training datasets for generative models is one avenue in which these models are able to reproduce this type of abusive content. For some models, their compositional generalization capabilities further allow them to combine concepts (e.g. adult sexual content and non-sexual depictions of children) to then produce AIG-CSAM. We are committed to avoiding or mitigating training data with a known risk of containing CSAM and CSEM. We are committed to detecting and removing CSAM and CSEM from our training data, and reporting any confirmed CSAM to the relevant authorities. We are committed to addressing the risk of creating AIG-CSAM that is posed by having depictions of children alongside adult sexual content in our video, images and audio generation training datasets.
  • Incorporate feedback loops and iterative stress-testing strategies in our development process: Continuous learning and testing to understand a model’s capabilities to produce abusive content is key in effectively combating the adversarial misuse of these models downstream. If we don’t stress test our models for these capabilities, bad actors will do so regardless. We are committed to conducting structured, scalable and consistent stress testing of our models throughout the development process for their capability to produce AIG-CSAM and CSEM within the bounds of law, and integrating these findings back into model training and development to improve safety assurance for our generative AI products and systems.
  • Employ content provenance with adversarial misuse in mind: Bad actors use generative AI to create AIG-CSAM. This content is photorealistic, and can be produced at scale. Victim identification is already a needle in the haystack problem for law enforcement: sifting through huge amounts of content to find the child in active harm’s way. The expanding prevalence of AIG-CSAM is growing that haystack even further. Content provenance solutions that can be used to reliably discern whether content is AI-generated will be crucial to effectively respond to AIG-CSAM. We are committed to developing state of the art media provenance or detection solutions for our tools that generate images and videos. We are committed to deploying solutions to address adversarial misuse, such as considering incorporating watermarking or other techniques that embed signals imperceptibly in the content as part of the image and video generation process, as technically feasible.

DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process

  • Safeguard our generative AI products and services from abusive content and conduct: Our generative AI products and services empower our users to create and explore new horizons. These same users deserve to have that space of creation be free from fraud and abuse. We are committed to combating and responding to abusive content (CSAM, AIG-CSAM, and CSEM) throughout our generative AI systems, and incorporating prevention efforts. Our users’ voices are key, and we are committed to incorporating user reporting or feedback options to empower these users to build freely on our platforms.
  • Responsibly host models: As our models continue to achieve new capabilities and creative heights, a wide variety of deployment mechanisms manifests both opportunity and risk. Safety by design must encompass not just how our model is trained, but how our model is hosted. We are committed to responsible hosting of our first-party generative models, assessing them e.g. via red teaming or phased deployment for their potential to generate AIG-CSAM and CSEM, and implementing mitigations before hosting. We are also committed to responsibly hosting third-party models in a way that minimizes the hosting of models that generate AIG-CSAM. We will ensure we have clear rules and policies around the prohibition of models that generate child safety violative content.
  • Encourage developer ownership in safety by design: Developer creativity is the lifeblood of progress. This progress must come paired with a culture of ownership and responsibility. We encourage developer ownership in safety by design. We will endeavor to provide information about our models, including a child safety section detailing steps taken to avoid the downstream misuse of the model to further sexual harms against children. We are committed to supporting the developer ecosystem in their efforts to address child safety risks.

MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks

  • Prevent our services from scaling access to harmful tools: Bad actors have built models specifically to produce AIG-CSAM, in some cases targeting specific children to produce AIG-CSAM depicting their likeness. They also have built services that are used to “nudify” content of children, creating new AIG-CSAM. This is a severe violation of children’s rights. We are committed to removing from our platforms and search results these models and services.
  • Invest in research and future technology solutions: Combating child sexual abuse online is an ever-evolving threat, as bad actors adopt new technologies in their efforts. Effectively combating the misuse of generative AI to further child sexual abuse will require continued research to stay up to date with new harm vectors and threats. For example, new technology to protect user content from AI manipulation will be important to protecting children from online sexual abuse and exploitation. We are committed to investing in relevant research and technology development to address the use of generative AI for online child sexual abuse and exploitation. We will continuously seek to understand how our platforms, products and models are potentially being abused by bad actors. We are committed to maintaining the quality of our mitigations to meet and overcome the new avenues of misuse that may materialize.
  • Fight CSAM, AIG-CSAM and CSEM on our platforms: We are committed to fighting CSAM online and preventing our platforms from being used to create, store, solicit or distribute this material. As new threat vectors emerge, we are committed to meeting this moment. We are committed to detecting and removing child safety violative content on our platforms. We are committed to disallowing and combating CSAM, AIG-CSAM and CSEM on our platforms, and combating fraudulent uses of generative AI to sexually harm children.

The post Microsoft Joins Thorn and All Tech Is Human to enact strong child safety commitments for generative AI appeared first on Microsoft On the Issues.


OpenAI’s commitment to child safety: adopting safety by design principles

We’re joining Thorn, All Tech Is Human, and other leading companies in an effort to prevent the misuse of generative AI to perpetrate, proliferate, and further sexual harms against children.

Meta’s Llama 3 models are now available in Amazon Bedrock


Today, we are announcing the general availability of Meta’s Llama 3 models in Amazon Bedrock. Meta Llama 3 is designed for you to build, experiment, and responsibly scale your generative artificial intelligence (AI) applications. The new Llama 3 models are Meta’s most capable yet, supporting a broad range of use cases with improvements in reasoning, code generation, and instruction following.

According to Meta’s Llama 3 announcement, the Llama 3 model family is a collection of pre-trained and instruction-tuned large language models (LLMs) in 8B and 70B parameter sizes. These models were trained on over 15 trillion tokens of data, a training dataset seven times larger than that used for Llama 2, including four times more code. Llama 3 also supports an 8K context length, double the capacity of Llama 2.

You can now use two new Llama 3 models in Amazon Bedrock, further increasing model choice within Amazon Bedrock. These models provide the ability for you to easily experiment with and evaluate even more top foundation models (FMs) for your use case:

  • Llama 3 8B is ideal for limited computational power and resources, and edge devices. The model excels at text summarization, text classification, sentiment analysis, and language translation.
  • Llama 3 70B is ideal for content creation, conversational AI, language understanding, research development, and enterprise applications. The model excels at text summarization and accuracy, text classification and nuance, sentiment analysis and nuance reasoning, language modeling, dialogue systems, code generation, and following instructions.

Meta is also currently training additional Llama 3 models over 400B parameters in size. These 400B models will have new capabilities, including multimodality, multiple languages support, and a much longer context window. When released, these models will be ideal for content creation, conversational AI, language understanding, research and development (R&D), and enterprise applications.

Llama 3 models in action
If you are new to using Meta models, go to the Amazon Bedrock console and choose Model access on the bottom left pane. To access the latest Llama 3 models from Meta, request access separately for Llama 3 8B Instruct or Llama 3 70B Instruct.

To test the Meta Llama 3 models in the Amazon Bedrock console, choose Text or Chat under Playgrounds in the left menu pane. Then choose Select model and select Meta as the category and Llama 3 8B Instruct or Llama 3 70B Instruct as the model.

By choosing View API request, you can also access the model using code examples in the AWS Command Line Interface (AWS CLI) and AWS SDKs. You can use model IDs such as meta.llama3-8b-instruct-v1:0 or meta.llama3-70b-instruct-v1:0.

Here is a sample of the AWS CLI command:

$ aws bedrock-runtime invoke-model \
  --model-id meta.llama3-8b-instruct-v1:0 \
  --body "{\"prompt\":\"Simply put, the theory of relativity states that\\n the laws of physics are the same everywhere in the universe, and that the passage of time and the length of objects can vary depending on their speed and position in a gravitational field \",\"max_gen_len\":512,\"temperature\":0.5,\"top_p\":0.9}" \
  --cli-binary-format raw-in-base64-out \
  --region us-east-1 \
  invoke-model-output.txt

You can use the code examples for Amazon Bedrock in the AWS SDKs to build applications in various programming languages. The following Python code example shows how to invoke a Llama 3 model in Amazon Bedrock for text generation.

import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)

# Bedrock runtime client; Llama 3 is available in us-east-1 and us-west-2.
bedrock_runtime_client = boto3.client("bedrock-runtime", region_name="us-east-1")


def invoke_llama3(prompt):
    try:
        body = {
            "prompt": prompt,
            "temperature": 0.5,
            "top_p": 0.9,
            "max_gen_len": 512,
        }

        response = bedrock_runtime_client.invoke_model(
            modelId="meta.llama3-8b-instruct-v1:0", body=json.dumps(body)
        )

        # The response body is a streaming blob containing a JSON document;
        # the generated text is in its "generation" field.
        response_body = json.loads(response["body"].read())
        return response_body["generation"]

    except ClientError:
        logger.error("Couldn't invoke Llama 3")
        raise

You can use these Llama 3 models for a variety of tasks, such as question answering, language translation, sentiment analysis, and more. Here is an example of Llama 3 models with few-shot in-context learning where we provide training samples available to the model.

Input: Translate English to French:
sea otter => loutre de mer
peppermint => menthe poivrée
plush girafe => girafe peluche
cheese =>
Output:  fromage
panda => panda
penguin => pingouin
polar bear => ours polaire
puffin => macareux
sloth => paresseux
walrus => morse
whale => baleine
zebra => zèbre

Translate English to German:
sea otter => Seeotter

You can also use Llama 3 instruct models optimized for dialogue use cases. The input to the instruct model endpoints is the previous history between the chat assistant and the user. So, you can ask questions contextual to the conversation that has happened so far and provide the system configuration, such as personas, which define the chat assistant’s behavior in the following format:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
[SYSTEM_TEXT]<|eot_id|><|start_header_id|>user<|end_header_id|>
[USER_TEXT]<|eot_id|><|start_header_id|>assistant<|end_header_id|>
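To make the template concrete, here is a small helper that assembles a prompt string in this format from a system message and a list of chat turns. The helper is my own illustration, not part of any AWS or Meta SDK:

```python
def build_llama3_prompt(system_text, turns):
    """Assemble a Llama 3 instruct-style prompt string.

    turns is a list of (role, text) pairs, where role is "user" or
    "assistant". A trailing assistant header cues the model to respond.
    """
    parts = ["<|begin_of_text|>"]
    if system_text:
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>\n{system_text}<|eot_id|>"
        )
    for role, text in turns:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>\n{text}<|eot_id|>")
    # Leave an open assistant header so the model generates the next reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>")
    return "".join(parts)


prompt = build_llama3_prompt(
    "You are a concise travel guide.",
    [("user", "I am going to Paris, what should I see?")],
)
print(prompt)
```

Check Meta’s prompt-format documentation for the exact whitespace the models were trained with; this sketch follows the template shown above.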

In the following example, the user has a conversation with the assistant about tourist sites in Paris. Then, the user inquires about the first option recommended by the chat assistant.

Input: <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n
I am going to Paris, what should I see?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n
1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n
2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n
3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\n
These are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n
What is so great about #1?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n
Output: The Eiffel Tower! It's an iconic symbol of Paris and one of the most recognizable landmarks in the world. Here are some reasons why it's so great:\n\n
1. **Engineering marvel**: When it was built for the 1889 World's Fair, the Eiffel Tower was a groundbreaking feat of engineering. It was the tallest structure in the world at the time, standing at 324 meters (1,063 feet).\n
2. **Panoramic views**: The Eiffel Tower offers breathtaking 360-degree views of the city from its observation decks. You can see many of Paris' famous landmarks, such as the Seine River, the Louvre, and Montmartre.\n
3. **Romantic atmosphere**: The Eiffel Tower is often associated with romance and is a popular spot for proposals, weddings, and romantic dinners. The tower's iron latticework and twinkling lights at night add to its romantic ambiance.\n
4. **Historical significance**: The Eiffel Tower has played a significant role in French history, serving as a communication tower during World War I and a symbol of French culture and engineering.\n
5. **Accessibility**: The Eiffel Tower is easily accessible by public transportation or on foot, and there are various ways to experience it, including taking the stairs, elevator, or a guided tour.\n
6. **Nighttime spectacle**: The Eiffel Tower is even more magical at night, when it's illuminated with thousands of twinkling lights. The light show, known as the \"Eiffel Tower Sparkle,\" takes place every hour on the hour from dusk to midnight.\n
7. **Iconic status**: The Eiffel Tower has been featured in countless films, literature, and art, making it an instantly recognizable symbol of Paris and France.\n\n
Overall, the Eiffel Tower is a must-visit attraction in Paris, offering a unique combination of history, engineering, romance, and stunning views.

To learn more about the new prompt template and special tokens of Llama 3, check out Meta’s model cards and prompt formats or Llama Recipes in the GitHub repository.

Now available
Meta’s Llama 3 models are available today in Amazon Bedrock in the US East (N. Virginia) and US West (Oregon) Regions. Check the full Region list for future updates. To learn more, check out the Llama in Amazon Bedrock product page and pricing page.

Give Llama 3 a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.

Visit our community.aws site to find deep-dive technical content and to discover how our Builder communities are using Amazon Bedrock in their solutions.

Channy

Read the whole story
alvinashcraft
1 hour ago
reply
West Grove, PA
Share this story
Delete

Microsoft Generative AI Hackathon: A Call to Innovate in Healthcare Technology

1 Share

The digital frontier is expanding, and with it, the possibilities for technological innovation in healthcare. Microsoft is inviting developers and tech enthusiasts to participate in the Microsoft Generative AI Hackathon.

This event promises to blend the artistry of multimodal experiences with the precision of modern technology, using tools from Microsoft Azure AI.

If you’re passionate about driving change and innovation in healthcare through technology, this hackathon is your stage.


What is the Microsoft Generative AI Hackathon?

Imagine combining sight, sound, touch, and perhaps even more senses in an app that aids doctors, nurses, and healthcare providers in their daily tasks. The Microsoft Generative AI Hackathon is not just a coding challenge; it’s an opportunity to craft solutions that are as multifaceted as the finest gourmet meal, where each line of code acts as an ingredient contributing to a larger, impactful technological solution.

 

Why Focus on Healthcare?

Healthcare is a field that benefits immensely from integrated technology solutions. By participating in this hackathon, you can create apps that support diagnostic processes, enhance patient interaction, and streamline healthcare services. The potential to impact this sector with innovative solutions using Azure AI is immense.

 

Event Details and How to Participate

Theme: Imagine, Integrate, Innovate: Build with Azure AI to revolutionize multimodal experiences.

Challenge: Developers are tasked with creating a multimodal app that uses at least two of the following modes: image, video/motion, voice/audio, or text. Utilizing Microsoft’s Responsible AI tools and principles will ensure that these innovations not only lead in technology but also in ethical AI usage.

Participation: To get started, review the resources here, join the Microsoft Azure AI Community Discord, and the Devpost Discord for collaboration opportunities.

 

Submission Requirements:

💼 Provide a URL for your working app and testing instructions for the judges to review and test your project.

🧑‍💻 Provide a URL to your GitHub code repository. The repository must be public and have an open-source license from one of the following:

• MIT – https://opensource.org/licenses/MIT
• Apache 2.0 – https://opensource.org/licenses/Apache-2.0
• 3-Clause BSD – https://opensource.org/licenses/BSD-3-Clause

🎥 Include a 3-minute video that demonstrates your project in action (hosted on YouTube, Vimeo, or Facebook Video and made public). The video should include an explanation of how your project uses Azure AI while highlighting the impact and category chosen.

📩 Complete your submission form on Devpost before the final deadline.

 

Prizes to Inspire Your Innovation

The Microsoft Generative AI Hackathon is not just about recognition but also substantial rewards:

$32,400 in prizes

⭐ First Place
* $8,000 USD
* Azure credits equivalent to $1,000 (total, not per teammate)
* Featured in blog post
* Special recognition at Microsoft Build
* Meeting with Microsoft AI team
* Up to 3 teammates will receive additional cash prizes of $2,000 each to support optional travel to Microsoft Build 2024 flagship conference in Seattle, WA

⭐ Second Place
* $4,000 USD
* Azure credits equivalent to $1,000 (total, not per teammate)
* Featured in blog post
* Special recognition at Microsoft Build
* Meeting with Microsoft AI team

⭐ Third Place
* $3,000 USD
* Azure credits equivalent to $1,000 (total, not per teammate)
* Featured in blog post
* Special recognition at Microsoft Build
* Meeting with Microsoft AI team

⭐ Honorable Mention (5)
* Featured in blog post
* Swag valued at $100 (1 team of up to 5 people)

⭐ Best Use of VS Code Extension Bonus Prize (5)
* Featured in blog post
* Swag valued at $100 (1 team of up to 5 people)

⭐ Eligible Submitter Bonus Prize
* Digital badge for the first 100 eligible submissions

 

How Can Your Innovation Impact Healthcare?

Multimodal AI can transform healthcare delivery. For example, an app combining voice and video can assist in remote patient monitoring, while integrating text and image modes can enhance electronic health records (EHRs). The possibilities are limitless, and your solution could be at the forefront of this transformation.
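As a rough sketch of the text-plus-image idea, the snippet below pairs an auto-generated image caption with a clinician's free-text note to produce a draft record entry. The captioning helper assumes the `azure-ai-vision-imageanalysis` package and a deployed Azure AI Vision resource; the function names and draft format are illustrative, not part of any official healthcare workflow.

```python
def draft_ehr_entry(image_caption: str, clinician_note: str) -> str:
    """Combine two modalities (image, text) into a draft record entry."""
    return (
        f"Image findings (auto-captioned): {image_caption}\n"
        f"Clinician note: {clinician_note}\n"
        "Status: draft - requires clinician review"
    )

def caption_image(image_bytes: bytes, endpoint: str, key: str) -> str:
    # Requires the azure-ai-vision-imageanalysis package and an
    # Azure AI Vision resource (endpoint and key are placeholders).
    from azure.ai.vision.imageanalysis import ImageAnalysisClient
    from azure.ai.vision.imageanalysis.models import VisualFeatures
    from azure.core.credentials import AzureKeyCredential

    client = ImageAnalysisClient(endpoint, AzureKeyCredential(key))
    result = client.analyze(
        image_data=image_bytes,
        visual_features=[VisualFeatures.CAPTION],
    )
    return result.caption.text
```

Keeping the combination step separate from the Azure call makes the multimodal logic easy to test without credentials, and the explicit "draft" status keeps a human reviewer in the loop.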

 

Judging Criteria:

Technological Implementation
Does the project demonstrate quality software development? Did the developers go above and beyond by using Azure AI features?

Potential Impact
How big of an impact could the project have on the AI community? How big of an impact could it have beyond the target community?

Quality of the Idea
How creative and unique is the project? Does the concept exist already? If so, how much does the project improve on it?

Multimodal Functionality
Does the project make interesting use of the required multimodal functionality? How well do 2 or more multimodal features (image, video/motion, voice/audio, text) add value to the overall project?

Bonus VS Code Extensions
Does the project use VS Code Extensions? How well do they add value to the overall project?


This hackathon represents a unique opportunity to showcase your skills, make an impact on the healthcare industry, and possibly revolutionize aspects of healthcare delivery. It’s more than a competition; it’s a chance to contribute to a future where technology and healthcare intersect more seamlessly than ever.

Join Devpost to register for Microsoft Generative AI Hackathon

➕ Plus find and participate in other great hackathons to improve your skills and win prizes. Join here! 👋

The post Microsoft Generative AI Hackathon: A Call to Innovate in Healthcare Technology appeared first on Paul Joseph Swider.
