By Ningjing Gao, Senior Group Program Manager, Security PMO
The Adobe Security Program Management Office (PMO) oversees a diverse portfolio of more than 20 strategic technical programs annually, each meticulously defined with its own distinct scope, clear deliverables, and completion deadlines. Security initiatives are complex and large in scale, often involving many interrelated tasks and dependencies that make them susceptible to delays and unforeseen changes, so technical program managers (TPMs) must closely monitor each program’s health and success throughout the program lifecycle.
In this blog, I will share how the Adobe Security PMO team measures the effectiveness of its security programs throughout the various program phases — program initiation, planning, execution, and closing — and provide insights on how you can effectively evaluate your own programs each step of the way to amplify their impact within your organization.
Program Initiation: Defining Measurements of Success
Defining what success looks like starts from the very beginning of program initiation. At Adobe, the TPM assigned to a specific program is responsible for determining its initial scope, which includes a distinct set of deliverables, timeline and milestones, and a clear measurement of success. Success typically consists of high-level program KPIs later refined during the program planning phase to measure program performance. Both the program sponsors and key stakeholders provide input into these KPIs.
Using our past endpoint detection and response (EDR) deployment program as an example, the assigned TPM defined the program’s measurement of success as reaching a 99% EDR agent deployment rate and enabling proactive protection policy by a specific date based on the program executive sponsors’ input.
In addition to the measurements of success, the TPM should also determine the program’s minimum viable product (MVP), or the absolute minimum delivery required from the program. The MVP is a lower bar than the measurement of success, but it is the baseline that the team absolutely cannot miss. Establishing an MVP is important as a fallback in case the measurement of success cannot be reached due to unforeseen circumstances, such as technical or other difficulties.
Program Planning: Setting SMART KPIs
When a program moves into a detailed planning phase, the PMO defines a set of “SMART” KPIs to measure the exact program performance.
SMART goals cover the following questions:
Specific: What exactly does the program want to achieve?
Measurable: How will you identify whether the program has achieved your goal?
Attainable: Is the program goal realistically achievable?
Relevant: Does it align with where the program and stakeholders want to be?
Time-Bound: What are the key milestones and deadlines that need to be met?
Going back to our EDR program as the example, one of the SMART goals our TPM established was to “reduce the number of hosts with old EDR agent versions to below 1% by the end of May 2024.” Another SMART goal for the program was to “reduce the number of hosts with missing or incorrect EDR tags down to 1% by the end of February 2024.” These goals are SMART because they are specific about what the program wants to achieve, measurable through KPI numbers, realistically attainable for our teams, relevant to our program’s stakeholders, and time-bound with clear deadlines.
Program Execution: Tracking and Reporting KPIs
At Adobe, we’ve developed automated dashboards to support KPI tracking efforts during the program execution phase. Automated dashboards help reduce the manual effort of data collection and provide timelier updates. For example, one dashboard can show the monthly count of created tickets versus resolved vulnerability finding tickets, which tells us both accurately and in real time whether a given remediation is effective. In any given month, we should be able to look at the dashboard and see whether the remediation speed is keeping up with the rate of newly created tickets. Since the ticket and vulnerability counts change hourly, automated dashboards can be a lifesaver for capturing accurate data.
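The created-versus-resolved comparison described above can be sketched in a few lines of Python. This is a minimal illustration using hypothetical ticket records; in practice the data would come from your ticketing system’s API, and the field names here are assumptions, not Adobe’s actual schema.

```python
from collections import Counter

# Hypothetical ticket records; in a real dashboard these would be pulled
# from your ticketing system's API. Each ticket records the month it was
# created and, if resolved, the month it was resolved.
tickets = [
    {"created": "2024-01", "resolved": "2024-01"},
    {"created": "2024-01", "resolved": "2024-02"},
    {"created": "2024-02", "resolved": None},
    {"created": "2024-02", "resolved": "2024-02"},
]

def monthly_created_vs_resolved(tickets):
    """Return per-month counts of created and resolved tickets."""
    created = Counter(t["created"] for t in tickets)
    resolved = Counter(t["resolved"] for t in tickets if t["resolved"])
    months = sorted(set(created) | set(resolved))
    return {
        m: {"created": created.get(m, 0), "resolved": resolved.get(m, 0)}
        for m in months
    }

stats = monthly_created_vs_resolved(tickets)
for month, counts in stats.items():
    # Remediation is "keeping up" when tickets are resolved at least as
    # fast as new ones are created in that month.
    trend = "keeping up" if counts["resolved"] >= counts["created"] else "falling behind"
    print(month, counts, trend)
```

A real dashboard would refresh these counts on a schedule, since the underlying ticket counts change hourly.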
The TPM then aggregates the program KPIs and dashboard outputs and shares them through regular program status reports that are sent to the program stakeholders for visibility and transparency. With these dashboards and status reports, program stakeholders can course correct more rapidly and make more impactful data-driven decisions.
Program Closing: Soliciting Stakeholder Feedback & Sign-Off
Finally, the stakeholders officially sign off on a program during the closing phase, validating its ultimate success. At Adobe, we developed a clear program sign-off process to evaluate the program and get final feedback:
Step 1: Identify and list all sign-off parties and their responsible areas
Step 2: TPM creates a central sign-off document and informs all stakeholders of the sign-off deadlines
Step 3: Each stakeholder enters a decision by the due date. If they are unable to provide sign-off, they must provide a specific reason and action required for them to sign off
Step 4: TPM coordinates closure of the required actions, then requests sign-off again
Step 5: TPM provides a final sign-off summary to all involved parties and program stakeholders as part of the program closure
The sign-off process eliminates any ambiguity about the impact or results of the program and brings attention to what more could be done or improved to reach the program’s goals, if anything.
Measuring Individual TPM Success
Measuring a program’s success is important, but measuring how well the assigned TPM delivers on the program is equally so. We want to know whether the individual TPM is successful in the eyes of the stakeholders, as this is a key factor to the program’s overall success.
To evaluate our TPMs, we send out regular program stakeholder surveys throughout the program cycle to get the stakeholders’ assessment of the program management in terms of scope, timeline, deliverables, risk, communication, and budget management. The survey includes some questions with predefined scales as well as open-ended questions asking for suggestions on improvement.
Below is an example of our survey using predefined scales:
Final Takeaways
To wrap up, here are the five (5) key considerations to help you measure your security program’s success:
In the program initiation phase, clearly define your measurement of success and the MVP.
During the program’s detailed planning phase, define a set of SMART KPIs that should be tracked.
During the program execution phase, leverage automated dashboards to report on the progress of your KPIs, and embed them as a part of your regular program status communications to foster transparency.
At the program closing phase, gather feedback from stakeholders and obtain final sign-offs to reduce any ambiguity.
Survey your program stakeholders regularly to get a pulse check on the program’s progress.
Measuring security program success requires a delicate balance of art and science. By integrating these five key considerations and lessons learned, you can be more confident in enhancing your programs’ effectiveness year after year.
What’s on Your Mind? We Want to Hear from You!
Your opinion matters to us. Help shape the future of our blog by sharing your ideas and preferences. Click the link below to take a quick survey and tell us what you’d like to read about next.
Editors: Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
Announcing the release of Kubernetes v1.30: Uwubernetes, the cutest release!
Similar to previous releases, the release of Kubernetes v1.30 introduces new stable, beta, and alpha
features. The consistent delivery of top-notch releases underscores the strength of our development
cycle and the vibrant support from our community.
This release consists of 45 enhancements. Of those enhancements, 17 have graduated to Stable, 18 are
entering Beta, and 10 have graduated to Alpha.
Release theme and logo
Kubernetes v1.30: Uwubernetes
Kubernetes v1.30 makes your clusters cuter!
Kubernetes is built and released by thousands of people from all over the world and all walks of
life. Most contributors are not being paid to do this; we build it for fun, to solve a problem, to
learn something, or for the simple love of the community. Many of us found our homes, our friends,
and our careers here. The Release Team is honored to be a part of the continued growth of
Kubernetes.
For the people who built it, for the people who release it, and for the furries who keep all of our
clusters online, we present to you Kubernetes v1.30: Uwubernetes, the cutest release to date. The
name is a portmanteau of “kubernetes” and “UwU,” an emoticon used to indicate happiness or cuteness.
We’ve found joy here, but we’ve also brought joy from our outside lives that helps to make this
community as weird and wonderful and welcoming as it is. We’re so happy to share our work with you.
UwU ♥️
Improvements that graduated to stable in Kubernetes v1.30
This is a selection of some of the improvements that are now stable following the v1.30 release.
Robust VolumeManager reconstruction after kubelet restart (SIG Storage)
This is a volume manager refactoring that allows the kubelet to populate additional information
about how existing volumes are mounted during the kubelet startup. In general, this makes volume
cleanup after kubelet restart or machine reboot more robust.
This does not bring any changes for users or cluster administrators. We used the feature process and
feature gate NewVolumeManagerReconstruction to be able to fall back to the previous behavior in
case something goes wrong. Now that the feature is stable, the feature gate is locked and cannot be
disabled.
Prevent unauthorized volume mode conversion during volume restore (SIG Storage)
For Kubernetes 1.30, the control plane always prevents unauthorized changes to volume modes when
restoring a snapshot into a PersistentVolume. As a cluster administrator, you'll need to grant
permissions to the appropriate identity principals (for example: ServiceAccounts representing a
storage integration) if you need to allow that kind of change at restore time.
Warning: Action required before upgrading. The prevent-volume-mode-conversion feature flag is enabled by
default in the external-provisioner v4.0.0 and external-snapshotter v7.0.0. Volume mode change
will be rejected when creating a PVC from a VolumeSnapshot unless you perform the steps described in
the "Urgent Upgrade Notes" sections for the external-provisioner v4.0.0 and the
external-snapshotter v7.0.0.
Pod scheduling readiness (SIG Scheduling)
Pod scheduling readiness graduates to stable this release, after being promoted to beta in
Kubernetes v1.27.
This now-stable feature lets Kubernetes avoid trying to schedule a Pod that has been defined, when
the cluster doesn't yet have the resources provisioned to allow actually binding that Pod to a node.
That's not the only use case; the custom control on whether a Pod can be allowed to schedule also
lets you implement quota mechanisms, security controls, and more.
Crucially, marking these Pods as exempt from scheduling cuts the work that the scheduler would
otherwise do, churning through Pods that can't or won't schedule onto the nodes your cluster
currently has. If you have cluster
autoscaling active, using scheduling
gates doesn't just cut the load on the scheduler, it can also save money. Without scheduling gates,
the autoscaler might otherwise launch a node that doesn't need to be started.
In Kubernetes v1.30, by specifying (or removing) a Pod's .spec.schedulingGates, you can control
when a Pod is ready to be considered for scheduling. This is a stable feature and is now formally
part of the Kubernetes API definition for Pod.
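A minimal manifest sketch of a gated Pod (the gate name example.com/capacity-ready is illustrative, not a standard name): the Pod stays unschedulable until every entry is removed from .spec.schedulingGates.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
  - name: example.com/capacity-ready   # remove this gate to release the Pod to the scheduler
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```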
Min domains in PodTopologySpread (SIG Scheduling)
The minDomains parameter for PodTopologySpread constraints graduates to stable this release, which
allows you to define the minimum number of domains. This feature is designed to be used with Cluster
Autoscaler.
If you previously attempted to use this feature and there weren't enough domains already present, Pods would be
marked as unschedulable. The Cluster Autoscaler would then provision node(s) in new domain(s), and
you'd eventually get Pods spreading over enough domains.
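A minimal sketch of a spread constraint using minDomains (label values are illustrative). Note that minDomains is only honored together with whenUnsatisfiable: DoNotSchedule.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-pod
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 3   # keep Pods unschedulable until they can spread across at least 3 zones
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```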
Go workspaces for k8s.io repos (SIG Architecture)
The Kubernetes repo now uses Go workspaces. This should not impact end users at all, but does have an
impact on developers of downstream projects. Switching to workspaces caused some breaking changes
in the flags to the various k8s.io/code-generator
tools. Downstream consumers should look at
staging/src/k8s.io/code-generator/kube_codegen.sh
to see the changes.
New beta features
Node log query (SIG Windows)
To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows fetching
logs of services running on the node. To use the feature, ensure that the NodeLogQuery feature
gate is enabled for that node, and that the kubelet configuration options enableSystemLogHandler
and enableSystemLogQuery are both set to true.
Following the v1.30 release, this is now beta (you still need to enable the feature to use it,
though).
On Linux the assumption is that service logs are available via journald. On Windows the assumption
is that service logs are available in the application log provider. Logs are also available by
reading files within /var/log/ (Linux) or C:\var\log\ (Windows). For more information, see the
log query documentation.
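A sketch of the kubelet configuration needed to turn this on (the feature is beta, so the gate must still be enabled explicitly):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true
enableSystemLogHandler: true
enableSystemLogQuery: true
```

With this in place, logs can be fetched through the API server proxy, for example with `kubectl get --raw "/api/v1/nodes/<node-name>/proxy/logs/?query=kubelet"`.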
CRD validation ratcheting (SIG API Machinery)
You need to enable the CRDValidationRatcheting feature gate to use this behavior, which then
applies to all CustomResourceDefinitions in your cluster.
Provided you enabled the feature gate, Kubernetes implements validation ratcheting for
CustomResourceDefinitions. The API server is willing to accept updates to resources that are not valid
after the update, provided that each part of the resource that failed to validate was not changed by
the update operation. In other words, any invalid part of the resource that remains invalid must
have already been wrong. You cannot use this mechanism to update a valid resource so that it becomes
invalid.
This feature allows authors of CRDs to confidently add new validations to the OpenAPIV3 schema under
certain conditions. Users can update to the new schema safely without bumping the version of the
object or breaking workflows.
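As a hypothetical illustration, suppose a CRD author tightens an existing string field with a newly added pattern (the field name and pattern below are made up):

```yaml
# Fragment of a CRD's schema after tightening. With ratcheting enabled,
# existing objects whose .spec.hostname already violates the new pattern
# can still be updated, as long as the update does not change .spec.hostname.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        hostname:
          type: string
          pattern: "^[a-z0-9.-]+$"   # newly added validation
```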
Contextual logging (SIG Instrumentation)
Contextual Logging advances to beta in this release, empowering developers and operators to inject
customizable, correlatable contextual details like service names and transaction IDs into logs
through WithValues and WithName. This enhancement simplifies the correlation and analysis of log
data across distributed systems, significantly improving the efficiency of troubleshooting efforts.
By offering a clearer insight into the workings of your Kubernetes environments, Contextual Logging
ensures that operational challenges are more manageable, marking a notable step forward in
Kubernetes observability.
Make Kubernetes aware of the LoadBalancer behaviour (SIG Network)
The LoadBalancerIPMode feature gate is now beta and is now enabled by default. This feature allows
you to set the .status.loadBalancer.ingress.ipMode for a Service with type set to
LoadBalancer. The .status.loadBalancer.ingress.ipMode specifies how the load-balancer IP
behaves. It may be specified only when the .status.loadBalancer.ingress.ip field is also
specified. See more details about specifying IPMode of load balancer
status.
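As a sketch of what this looks like in practice, a load-balancer controller (not an end user) might populate the Service status like this (the IP is illustrative):

```yaml
# Written by the cloud provider's load-balancer controller
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10
      ipMode: Proxy   # traffic is delivered through the load balancer; "VIP" is the other mode
```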
New alpha features
Speed up recursive SELinux label change (SIG Storage)
From the v1.27 release, Kubernetes already included an optimization that sets SELinux labels on the
contents of volumes, using only constant time. Kubernetes achieves that speed up using a mount
option. The slower legacy behavior requires the container runtime to recursively walk through the
whole volume and apply SELinux labelling individually to each file and directory; this is
especially noticeable for volumes with large numbers of files and directories.
Kubernetes 1.27 graduated this feature as beta, but limited it to ReadWriteOncePod volumes. The
corresponding feature gate is SELinuxMountReadWriteOncePod. It's still enabled by default and
remains beta in 1.30.
Kubernetes 1.30 extends support for the SELinux mount option to all volumes as alpha, with a
separate feature gate: SELinuxMount. This feature gate introduces a behavioral change when
multiple Pods with different SELinux labels share the same volume. See
KEP
for details.
We strongly encourage users that run Kubernetes with SELinux enabled to test this feature and
provide any feedback on the KEP issue.
Feature gate                    Stage in v1.30    Behavior change
SELinuxMountReadWriteOncePod    Beta              No
SELinuxMount                    Alpha             Yes
Both feature gates SELinuxMountReadWriteOncePod and SELinuxMount must be enabled to test this
feature on all volumes.
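A sketch of enabling both gates via the kubelet configuration (assuming you manage kubelet configuration directly; depending on your setup the gates may also need enabling on other components):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SELinuxMountReadWriteOncePod: true
  SELinuxMount: true   # alpha; enables the mount-option optimization for all volumes
```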
This feature has no effect on Windows nodes or on Linux nodes without SELinux support.
Recursive read-only (RRO) mounts (SIG Node)
Recursive Read-Only (RRO) mounts, introduced in alpha this release, add a new layer of
security for your data. This feature lets you set volumes and their submounts as read-only,
preventing accidental modifications. Imagine deploying a critical application where data integrity
is key—RRO Mounts ensure that your data stays untouched, reinforcing your cluster's security with an
extra safeguard. This is especially crucial in tightly controlled environments, where even the
slightest change can have significant implications.
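A minimal sketch of a Pod requesting a recursive read-only mount (the paths are illustrative). The recursiveReadOnly field requires readOnly: true and, being alpha, the RecursiveReadOnlyMounts feature gate:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rro-pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true               # required for recursiveReadOnly
      recursiveReadOnly: Enabled   # submounts under /data become read-only too
  volumes:
  - name: data
    hostPath:
      path: /mnt/data
```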
Job success/completion policy (SIG Apps)
From Kubernetes v1.30, indexed Jobs support .spec.successPolicy to define when a Job can be
declared succeeded based on succeeded Pods. This allows you to define two types of criteria:
succeededIndexes indicates that the Job can be declared succeeded when these indexes succeeded,
even if other indexes failed.
succeededCount indicates that the Job can be declared succeeded when the number of succeeded
Indexes reaches this criterion.
After the Job meets the success policy, the Job controller terminates the lingering Pods.
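A minimal sketch of an indexed Job using a success policy (the index range is illustrative; this alpha feature requires the JobSuccessPolicy feature gate):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: leader-worker-job
spec:
  completions: 10
  parallelism: 10
  completionMode: Indexed
  successPolicy:
    rules:
    - succeededIndexes: "0,2-4"   # Job succeeds once these indexes succeed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.k8s.io/pause:3.9
```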
Traffic distribution for services (SIG Network)
Kubernetes v1.30 introduces the spec.trafficDistribution field within a Kubernetes Service as
alpha. This allows you to express preferences for how traffic should be routed to Service endpoints.
While traffic policies focus on strict
semantic guarantees, traffic distribution allows you to express preferences (such as routing to
topologically closer endpoints). This can help optimize for performance, cost, or reliability. You
can use this field by enabling the ServiceTrafficDistribution feature gate for your cluster and
all of its nodes. In Kubernetes v1.30, the following field value is supported:
PreferClose: Indicates a preference for routing traffic to endpoints that are topologically
proximate to the client. The interpretation of "topologically proximate" may vary across
implementations and could encompass endpoints within the same node, rack, zone, or even region.
Setting this value gives implementations permission to make different tradeoffs, for example
optimizing for proximity rather than equal distribution of load. You should not set this value if
such tradeoffs are not acceptable.
If the field is not set, the implementation (like kube-proxy) will apply its default routing
strategy.
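A minimal sketch of a Service opting into topology-aware routing preferences (selector and ports are illustrative; the cluster must have the ServiceTrafficDistribution feature gate enabled):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferClose   # prefer topologically closer endpoints
```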
Graduations, deprecations and removals for Kubernetes v1.30
Graduated to stable
This lists all the features that graduated to stable (also known as general availability). For a
full list of updates including new features and graduations from alpha to beta, see the release
notes.
This release includes a total of 17 enhancements promoted to Stable.
Deprecations and removals
Removed the SecurityContextDeny admission plugin, deprecated since v1.27
(SIG Auth, SIG Security, and SIG Testing)
With the removal of the SecurityContextDeny admission plugin, the Pod Security Admission plugin,
available since v1.25, is recommended instead.
Release notes
Check out the full details of the Kubernetes 1.30 release in our release
notes.
Availability
Kubernetes 1.30 is available for download on
GitHub. To get started with
Kubernetes, check out these interactive tutorials or run
local Kubernetes clusters using minikube. You can also easily
install 1.30 using kubeadm.
Release team
Kubernetes is only possible with the support, commitment, and hard work of its community. Each
release team is made up of dedicated community volunteers who work together to build the many pieces
that make up the Kubernetes releases you rely on. This requires the specialized skills of people
from all corners of our community, from the code itself to its documentation and project management.
We would like to thank the entire release team
for the hours spent hard at work to deliver the Kubernetes v1.30 release to our community. The
Release Team's membership ranges from first-time shadows to returning team leads with experience
forged over several release cycles. A very special thanks goes out to our release lead, Kat Cosgrove,
for supporting us through a successful release cycle, advocating for us, making sure that we could
all contribute in the best way possible, and challenging us to improve the release process.
Project velocity
The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity
of Kubernetes and various sub-projects. This includes everything from individual contributions to
the number of companies that are contributing and is an illustration of the depth and breadth of
effort that goes into evolving this ecosystem.
In the v1.30 release cycle, which ran for 14 weeks (January 8 to April 17), we saw contributions
from 863 companies and 1391 individuals.
Event update
KubeCon + CloudNativeCon China 2024 will take place in Hong Kong, from 21 – 23 August 2024! You
can find more information about the conference and registration on the event
site.
KubeCon + CloudNativeCon North America 2024 will take place in Salt Lake City, Utah, United
States of America, from 12 – 15 November 2024! You can find more information about the conference
and registration on the event site.
Upcoming release webinar
Join members of the Kubernetes v1.30 release team on Thursday, May 23rd, 2024, at 9 A.M. PT to learn
about the major features of this release, as well as deprecations and removals to help plan for
upgrades. For more information and registration, visit the event
page
on the CNCF Online Programs site.
Get involved
The simplest way to get involved
with Kubernetes is by joining one of the many Special Interest
Groups (SIGs) that align with your
interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at
our weekly community meeting,
and through the channels below. Thank you for your continued feedback and support.
As repairability becomes increasingly important, Surface is dedicated to making our devices easier to repair. Four years after first integrating modular components into Surface Laptop 3, we've continued to innovate to help organizations get the most from their devices. Ultimately, the repairability of Surface devices gives you more control, more options, and better value for continued worker productivity and security.
By prioritizing easy maintenance and repair, we empower your organization to manage device upkeep on your own terms, helping you get the most out of your investment without sacrificing performance or security.
Our latest devices -- Surface Pro 10 for Business and Surface Laptop 6 for Business -- incorporate new features to simplify the process of bringing devices back online. Let’s take a look at some of these new experiences.
Wayfinding methodology
Repairability markings and a wayfinding repair methodology are the latest innovations found in Surface Pro 10 and Surface Laptop 6 to simplify repair for our customers.
This methodology can significantly reduce mistakes by using repair markings and QR codes1 to navigate device repair. Surface Pro 10 includes repair markings to help technicians find the right tools and track the number of screws per component.
Both Surface Pro 10 and Surface Laptop 6 come with QR codes that link directly to the Microsoft Download Center where we host all available service guides. This helps technicians stay on task and minimize the amount of searching needed to complete a repair.
More replacement parts
We’re designing products and finding innovative solutions to help make it easier to repair devices. We’re also listening to your feedback by integrating more replacement components2 into Surface Laptop 6 and Surface Pro 10.
Technically inclined individuals with the knowledge, skills, and required tools can perform self-serve repairs on eligible Surface devices by following the applicable Surface Service Guide or article.
Once you’ve identified the parts needed for repair by referencing the appropriate Service Guide, you can purchase the replacement components through your Microsoft device reseller.3 There are no certifications required to repair or service a Surface device.
Microsoft is partnering with ifixit.com to offer complete tool kits to repair electronics. Use iFixit's everyday precision tool kit or essential electronics tool kit to repair your devices.
Improved device security
Replacement components also provide another layer of customer control. With Microsoft’s Solid-State Drive (SSD) Retention,4 organizations can keep the SSD from their Surface devices during a service event. This retention helps protect sensitive business information by reducing exposure during repairs.
Surface repair resources
Surface provides several resources to assist with device repair, including instructions that walk through the steps involved in repairing a Surface device.
2. Replacement components have a 90-day limited warranty unless the accompanying written warranty gives a longer time. Replacement components may be new or refurbished. Replacement components are currently only available for purchase separately. Microsoft replacement components are intended for out-of-warranty self-repair. These repairs should be performed by individuals with the knowledge and technical skill to perform complex electronic repairs. It is essential to follow the instructions in the applicable Microsoft Service Guide or article. Microsoft does not provide additional technical support for self-repair.
3. Availability of replacement components and service options may vary by product, by market, and over time. See Microsoft Service Guides at Download Surface Service Guides from Official Microsoft Download Center.
4. Solid-State Drive (SSD) Retention gives customers the option to retain their removable SSD during service events at no additional charge. Drive (SSD) Retention is only available on Microsoft Surface devices in which the SSD is marketed as removable per the technical specifications on the product’s description page.
Can speech become part of your development workflow? Carl and Richard talk to Karl Geitz about his use of NaturallySpeaking to create software in Visual Studio. Karl talks about using voice to write better, longer comments in his code and to navigate the features of Visual Studio itself. The effort started when dealing with a repetitive stress injury but has now evolved into his most productive approach to coding - one hand on the mouse, the other on function keys, and voice instead of typing!
If I made up a theme for today’s reading list, I think it’s “focus.” You’ll find a few pieces that are about focusing your automation efforts, AI usage, and even your time.
[article] 5 Well-Intentioned Behaviors That Can Hurt Your Team. It’s one thing to be intentionally terrible at something, but it’s worse when your seemingly best efforts have a negative impact. Here are things that many of us probably do, and should stop doing.
[survey] 2024 DORA survey. This is the tenth year of this widely-recognized report that explores what good software delivery practices look like. Participate in this year’s survey!
[blog] AI Trends Report 2024: AI’s Growing Role in Software Development. On the topic of surveys, the Docker team shared results from their own study. Not surprisingly, developers are using AI tools for coding assistance, writing docs, writing tests, and troubleshooting.
[blog] Build Your GenAI Strategy On A Rock-Solid Foundation (Model). I believe this refers to a family of foundation models, not a specific version. They change too fast! Maybe you can’t bet on one model, but don’t accidentally over-complicate things by trying to use all of them.
[youtube-video] Colab 101: Your Ultimate Beginner’s Guide! Millions of folks use Colab every month to learn Python, run models, and do all sorts of notebook-y things. This is a very good video intro.