
Refactoring the heart of PowerToys from C++ to C#


In the last few months, I have been working on refactoring the heart of PowerToys from C++ to C#. This heart is the PowerToys runner.

What is the PowerToys runner?

The PowerToys runner is the main part of PowerToys. It is the powertoys.exe executable, which is responsible for the following:

  • Showing the tray icon of PowerToys
  • Starting the different PowerToys modules based on the user settings and handling their processes
  • Piping different commands from the settings to the different modules

Currently, most of these functions are handled by the different module interfaces. Each PowerToys module has a module interface, which is just a C++ project that exports a DLL.

An example of a PowerToys module interface (shortened):

[...]

namespace NonLocalizable
{
    const wchar_t ModulePath[] = L"PowerToys.CropAndLock.exe";
}

namespace
{
    const wchar_t JSON_KEY_PROPERTIES[] = L"properties";
    const wchar_t JSON_KEY_WIN[] = L"win";
    const wchar_t JSON_KEY_ALT[] = L"alt";
    const wchar_t JSON_KEY_CTRL[] = L"ctrl";
    const wchar_t JSON_KEY_SHIFT[] = L"shift";
    const wchar_t JSON_KEY_CODE[] = L"code";
    const wchar_t JSON_KEY_REPARENT_HOTKEY[] = L"reparent-hotkey";
    const wchar_t JSON_KEY_THUMBNAIL_HOTKEY[] = L"thumbnail-hotkey";
    const wchar_t JSON_KEY_SCREENSHOT_HOTKEY[] = L"screenshot-hotkey";
    const wchar_t JSON_KEY_VALUE[] = L"value";
}

BOOL APIENTRY DllMain( HMODULE /*hModule*/,
                       DWORD  ul_reason_for_call,
                       LPVOID /*lpReserved*/
                     )
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        Trace::CropAndLock::RegisterProvider();
        break;
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
        break;
    case DLL_PROCESS_DETACH:
        Trace::CropAndLock::UnregisterProvider();
        break;
    }
    return TRUE;
}

class CropAndLockModuleInterface : public PowertoyModuleIface
{
public:
    // Return the localized display name of the powertoy
    virtual PCWSTR get_name() override
    {
        return app_name.c_str();
    }

    // Return the non localized key of the powertoy, this will be cached by the runner
    virtual const wchar_t* get_key() override
    {
        return app_key.c_str();
    }

    // Return the configured status for the gpo policy for the module
    virtual powertoys_gpo::gpo_rule_configured_t gpo_policy_enabled_configuration() override
    {
        return powertoys_gpo::getConfiguredCropAndLockEnabledValue();
    }

    // Return JSON with the configuration options.
    // These are the settings shown on the settings page along with their current values.
    virtual bool get_config(wchar_t* buffer, int* buffer_size) override
    {
        HINSTANCE hinstance = reinterpret_cast<HINSTANCE>(&__ImageBase);

        // Create a Settings object.
        PowerToysSettings::Settings settings(hinstance, get_name());

        return settings.serialize_to_buffer(buffer, buffer_size);
    }

    // Passes JSON with the configuration settings for the powertoy.
    // This is called when the user hits Save on the settings page.
    virtual void set_config(const wchar_t* config) override
    {
        try
        {
            // Parse the input JSON string.
            PowerToysSettings::PowerToyValues values =
                PowerToysSettings::PowerToyValues::from_json_string(config, get_key());

            parse_hotkey(values);

            values.save_to_settings_file();
        }
        catch (std::exception&)
        {
            // Improper JSON.
        }
    }

    virtual bool on_hotkey(size_t hotkeyId) override
    {
        if (m_enabled)
        {
            // [...]
        }

        return false;
    }

    virtual size_t get_hotkeys(Hotkey* hotkeys, size_t buffer_size) override
    {
        if (hotkeys && buffer_size >= 3)
        {
            hotkeys[0] = m_reparent_hotkey;
            hotkeys[1] = m_thumbnail_hotkey;
            hotkeys[2] = m_screenshot_hotkey;
        }
        return 3;
    }

    // Enable the powertoy
    virtual void enable() override
    {
        Logger::info("CropAndLock enabling");
        Enable();
    }

    // Disable the powertoy
    virtual void disable() override
    {
        Logger::info("CropAndLock disabling");
        Disable(true);
    }

    // Returns if the powertoy is enabled
    virtual bool is_enabled() override
    {
        return m_enabled;
    }

    // Destroy the powertoy and free memory
    virtual void destroy() override
    {
        Disable(false);
        delete this;
    }

    virtual void send_settings_telemetry() override
    {
        Logger::info("Send settings telemetry");
        Trace::CropAndLock::SettingsTelemetry(m_reparent_hotkey, m_thumbnail_hotkey, m_screenshot_hotkey);
    }

    CropAndLockModuleInterface()
    {
        app_name = L"CropAndLock";
        app_key = NonLocalizable::ModuleKey;
        LoggerHelpers::init_logger(app_key, L"ModuleInterface", LogSettings::cropAndLockLoggerName);

        m_reparent_event_handle = CreateDefaultEvent(CommonSharedConstants::CROP_AND_LOCK_REPARENT_EVENT);
        m_thumbnail_event_handle = CreateDefaultEvent(CommonSharedConstants::CROP_AND_LOCK_THUMBNAIL_EVENT);
        m_screenshot_event_handle = CreateDefaultEvent(CommonSharedConstants::CROP_AND_LOCK_SCREENSHOT_EVENT);
        m_exit_event_handle = CreateDefaultEvent(CommonSharedConstants::CROP_AND_LOCK_EXIT_EVENT);

        init_settings();
    }

private:
    void Enable()
    {
        m_enabled = true;

        // Log telemetry
        Trace::CropAndLock::Enable(true);

        // Pass the PID.
        unsigned long powertoys_pid = GetCurrentProcessId();
        std::wstring executable_args = L"";
        executable_args.append(std::to_wstring(powertoys_pid));
        
        ResetEvent(m_reparent_event_handle);
        ResetEvent(m_thumbnail_event_handle);
        ResetEvent(m_screenshot_event_handle);
        ResetEvent(m_exit_event_handle);

        SHELLEXECUTEINFOW sei{ sizeof(sei) };
        sei.fMask = { SEE_MASK_NOCLOSEPROCESS | SEE_MASK_FLAG_NO_UI };
        sei.lpFile = NonLocalizable::ModulePath;
        sei.nShow = SW_SHOWNORMAL;
        sei.lpParameters = executable_args.data();
        if (ShellExecuteExW(&sei) == false)
        {
            Logger::error(L"Failed to start CropAndLock");
            auto message = get_last_error_message(GetLastError());
            if (message.has_value())
            {
                Logger::error(message.value());
            }
        }
        else
        {
            m_hProcess = sei.hProcess;
        }
    }

    void Disable(bool const traceEvent)
    {
        m_enabled = false;

        // We can't just kill the process, since Crop and Lock might need to release any reparented windows first.
        SetEvent(m_exit_event_handle);

        ResetEvent(m_reparent_event_handle);
        ResetEvent(m_thumbnail_event_handle);
        ResetEvent(m_screenshot_event_handle);

        // Log telemetry
        if (traceEvent)
        {
            Trace::CropAndLock::Enable(false);
        }

        if (m_hProcess)
        {
            m_hProcess = nullptr;
        }
    }

    void parse_hotkey(PowerToysSettings::PowerToyValues& settings)
    {
        // [...]
    }

    bool is_process_running()
    {
        return WaitForSingleObject(m_hProcess, 0) == WAIT_TIMEOUT;
    }

    void init_settings()
    {
        try
        {
            // Load and parse the settings file for this PowerToy.
            PowerToysSettings::PowerToyValues settings =
                PowerToysSettings::PowerToyValues::load_from_settings_file(get_key());

            parse_hotkey(settings);
        }
        catch (std::exception&)
        {
            Logger::warn(L"An exception occurred while loading the settings file");
            // Error while loading from the settings file. Let default values stay as they are.
        }
    }

    std::wstring app_name;
    std::wstring app_key; //contains the non localized key of the powertoy

    bool m_enabled = false;
    HANDLE m_hProcess = nullptr;

    // TODO: actual default hotkey setting in line with other PowerToys.
    Hotkey m_reparent_hotkey = { .win = true, .ctrl = true, .shift = true, .alt = false, .key = 'R' };
    Hotkey m_thumbnail_hotkey = { .win = true, .ctrl = true, .shift = true, .alt = false, .key = 'T' };
    Hotkey m_screenshot_hotkey = { .win = true, .ctrl = true, .shift = true, .alt = false, .key = 'S' };

    HANDLE m_reparent_event_handle;
    HANDLE m_thumbnail_event_handle;
    HANDLE m_screenshot_event_handle;
    HANDLE m_exit_event_handle;
};

extern "C" __declspec(dllexport) PowertoyModuleIface* __cdecl powertoy_create()
{
    return new CropAndLockModuleInterface();
}

Not visible here are the various other files required to build this module interface into a DLL.

Why refactor the runner?

The current code base is very messy and hard to maintain. PowerToys is an incubation project, which means it constantly has to adapt to new requirements and changes. The current code base is not flexible enough to handle these changes and it is also very hard to add new features to it.

Furthermore, many parts of the code are simply over-engineered or even unused.

One last reason is that PowerToys is an open-source project, and it is very hard for new contributors to understand the code base and contribute to it. Refactoring the code base to C# will make it more accessible for new contributors and easier to maintain in the long run.

How does the new module interface look?

In the new system, each module consists of just one class that implements the IPowerToysModule interface. This means a big reduction in the number of projects that have to be built, as all module interfaces now live in one project. It also means the code compiles much faster and is easier to debug.

Example of the same module interface as above but in C# (shortened):

// [...]
namespace RunnerV2.ModuleInterfaces
{
    internal sealed partial class CropAndLockModuleInterface : ProcessModuleAbstractClass, IPowerToysModule, IDisposable, IPowerToysModuleShortcutsProvider, IPowerToysModuleSettingsChangedSubscriber
    {
        public string Name => "CropAndLock";

        public bool Enabled => SettingsUtils.Default.GetSettings<GeneralSettings>().Enabled.CropAndLock;

        public GpoRuleConfigured GpoRuleConfigured => GPOWrapper.GetConfiguredCropAndLockEnabledValue();

        private EventWaitHandle _reparentEvent = new(false, EventResetMode.AutoReset, Constants.CropAndLockReparentEvent());
        private EventWaitHandle _thumbnailEvent = new(false, EventResetMode.AutoReset, Constants.CropAndLockThumbnailEvent());
        private EventWaitHandle _terminateEvent = new(false, EventResetMode.AutoReset, Constants.CropAndLockExitEvent());

        public void Disable()
        {
            _terminateEvent.Set();
        }

        public void Enable()
        {
            PopulateShortcuts();
        }

        public void PopulateShortcuts()
        {
            Shortcuts.Clear();
            var settings = SettingsUtils.Default.GetSettings<CropAndLockSettings>(Name);
            Shortcuts.Add((settings.Properties.ThumbnailHotkey.Value,  () => _thumbnailEvent.Set()));
            Shortcuts.Add((settings.Properties.ReparentHotkey.Value,  () => _reparentEvent.Set()));
        }

        public void OnSettingsChanged()
        {
            PopulateShortcuts();
        }

        public List<(HotkeySettings Hotkey, Action Action)> Shortcuts { get; } = [];

        public override string ProcessPath => "PowerToys.CropAndLock.exe";

        public override string ProcessName => "PowerToys.CropAndLock";

        public override ProcessLaunchOptions LaunchOptions => ProcessLaunchOptions.SingletonProcess | ProcessLaunchOptions.RunnerProcessIdAsFirstArgument;

        public void Dispose()
        {
            GC.SuppressFinalize(this);
            _reparentEvent.Dispose();
            _thumbnailEvent.Dispose();
            _terminateEvent.Dispose();
        }
    }
}

As you can see, the code is much shorter and more concise. The new module interface also uses the new settings system, which means we can get rid of a lot of code related to handling the settings file.

Where am I at with the refactor?

The PR is mostly done. I am currently waiting for the .NET 10 upgrade to be finished so I can update the runner to .NET 10 and then complete the PR. Along the way I also upgraded some other things: all File Explorer add-ons now use a single project to export the DLL, instead of each add-on having its own project.

Also rewritten from C++ to C# are the PowerToys action runner (used to elevate certain processes) and the PowerToys updater.

One last thing that has to be done is adding telemetry back.

You can check out the Pull Request for the refactor here.


Spec-Driven Development with AI Using SpecKit in Syncfusion Code Studio


Spec‑Driven Development with AI: Build Reliable Software with SpecKit in Syncfusion Code Studio

TL;DR: Stop vibe‑coding and the refactor loop that follows. Spec‑Driven Development replaces vague prompting with clear specs, structured plans, and task‑based execution that AI can follow. With SpecKit and Syncfusion Code Studio working together, you get a reliable, spec‑first workflow that keeps development on track. The result: fewer surprises, faster reviews, cleaner architecture, and predictable, scalable output across teams.

Why AI‑assisted coding fails without specs and how Spec‑Driven Development (SDD) fixes it

AI tools are becoming a big part of how we build software. They can write code quickly, but they still depend on clear instructions. If we don’t explain what we want in a simple, structured way, the AI may generate pieces of code that work individually but don’t fit together well. Over time, this leads to projects that are harder to maintain, update, or debug.

Many teams also start coding too early. When you begin writing code before understanding the requirements, the design becomes fixed too soon. Any change later becomes slow and expensive, and sometimes entire sections need to be rewritten. That’s why it’s important to understand what you’re building and why before coding anything.

This is where SpecKit and Syncfusion® Code Studio come in. SpecKit guides you step‑by‑step to write your project rules, describe your feature clearly, plan how it should work, break it into tasks, and then let AI implement it safely. Syncfusion Code Studio, in turn, brings all of this together in one place using its built‑in AI chat, making the whole process easier.

In this blog, we’ll show you how Spec‑Driven Development with AI replaces “vibe coding” with a structured spec‑first workflow, and we’ll demonstrate it using a nested subtasks Todo feature as a practical example.

The trap of vibe coding

Many teams fall into what we call “vibe coding”: relying on conversational interactions with AI that feel productive at first but often lead to messy, inconsistent results.

Vibe coding happens when developers ask an AI to write code based only on quick, casual chat messages instead of clear instructions or rules. The AI guesses what you want based on the “vibe” of the conversation, not on an actual plan, specification, or project standards.

For example: A developer might ask, “Build me a todo app.” The AI will generate something, but it may ignore your coding standards, architecture, naming conventions, or design choices. The developer then spends hours fixing it, rewriting parts, or refactoring to match the real requirements.

The root problem

Chat conversations don’t persist as real engineering processes do. Each new message feels disconnected because there are no documented rules, no shared understanding, and no consistent context for the AI to follow. So even a great AI model will eventually drift away from what your team wants.

Due to this, developers may often have questions like:

  • “Why did it ignore our standards?”
  • “Why did it rewrite unrelated code?”
  • “Why did it forget what we agreed earlier?”

Without clear specs, plans, and rules, AI fills the gaps on its own. That’s why results often feel random or off‑track.

Vibe coding

Why specs are the backbone of AI‑driven projects

Instead of jumping straight into code and writing documentation later, spec‑driven development turns the process around: you start by writing a specification.

A spec is simply a clear explanation of what you want to build and how it should behave. It becomes a shared agreement for the AI tools you use.

The spec acts as your source of truth. When AI generates, tests, or updates code, it follows this spec, so the results stay consistent and match what you want.

Benefits you’ll feel immediately:

  • Less guesswork, fewer surprises.
  • Cleaner code and simpler reviews.
  • Faster iteration because decisions are made once and reused everywhere.

A spec isn’t something you write once and forget. It becomes a living guide that helps you catch problems early and collaborate better.

Meet SpecKit: The toolkit that makes spec‑driven development practical

SpecKit is an open‑source toolkit that helps you build software by starting with a clear specification.

Instead of guessing in code, you first describe what you want and how it should behave. SpecKit then turns that into a plan and step‑by‑step tasks the AI can follow. This keeps the work predictable and easier to review.

Why this matters

You make your intent explicit first. SpecKit’s commands create files like spec.md, plan.md, and tasks.md that your AI assistant can follow, so results are consistent and on‑spec.

SpecKit working principle

Here’s how the process breaks down in SpecKit:

  • Constitution: The AI agent creates a “project constitution” for you, a short set of core principles like architecture choices, coding style, and general standards.
  • Specify: You describe what you’re building from a user perspective, their needs, journeys, and desired outcomes, and the AI agent generates detailed specifications. This focuses on the why and what.
  • Clarify (Optional): If something in the specification is unclear, you can run a clarification step. This helps identify missing details, edge cases, or areas where the spec needs more explanation before planning begins.
  • Plan: The AI agent creates a comprehensive technical plan that respects your architecture.
  • Tasks: The agent takes the spec and plan. Then, it breaks them down into actual work. Each task solves a specific problem and can be implemented and tested independently. This is crucial because it gives the agent a way to validate its work and stay on track.
  • Analyze (optional): Before implementing, you can run an analysis step. This checks whether your spec, plan, and tasks match each other and finds contradictions, missing steps, or unclear requirements.
  • Implement: Your AI agent executes tasks one by one with focused changes. Because the spec defines what to build, the plan defines how, and tasks define exactly what to do, reviews become straightforward and manageable instead of overwhelming.

Here’s a simple visual that shows how all the SpecKit steps connect. This helps you understand the overall flow.

SpecKit workflow principle

Syncfusion Code Studio: Your AI-assisted IDE

Syncfusion Code Studio is an AI-powered integrated development environment (IDE) with built-in assistance to support modern software development workflows. It’s like an extension of your development brain. With five powerful modes, it helps you brainstorm, refactor, or build features faster and smarter.

To learn more about Code Studio, visit our Introduction blog.

Prerequisites

Before we start, ensure you have:

How SpecKit works inside Syncfusion Code Studio: From setup to implementation

With the concepts clear, it’s time to get hands-on. Let’s see how to set up SpecKit and build a To-Do list feature demo app using Syncfusion Code Studio.

Step 1: Set up your project with the Specify tool

Specify is a tool that sets up our project for SpecKit. If we run a single command, SpecKit automatically creates all the setup files and folders our project needs.

We can install the Specify tool directly using the following command:

uvx --from git+https://github.com/github/spec-kit.git specify init <PROJECT_NAME>

This command installs the tool and creates the initial project structure that is needed for spec-driven development.

  • AI assistant selection: During installation, you will be asked to choose an AI assistant. Select Copilot.
    • Why choose Copilot:
      • Right now, the installer doesn’t include a “Code Studio” choice.
      • Selecting Copilot ensures SpecKit can communicate with an AI assistant supported by Code Studio, so all SpecKit commands (specify/plan/tasks/analyze/implement) run without issues.
      • You can still use Code Studio as normal.

AI assistant selection

  • Choose script type: The installer will also ask you to pick a script type. Choose the one based on your operating system:
    • Select ‘sh’ for macOS
    • Select ‘ps’ for Windows (PowerShell)

Choose script type

  • Move the generated folders into the Code Studio structure: Once the installation finishes, SpecKit creates two folders inside your project’s ‘.github’ folder:
    • prompts
    • agents

Code Studio expects these folders to be placed inside the .codestudio folder. To do so:

  • Create the .codestudio folder in your project root:
    mkdir .codestudio
  • Then, move both the prompts and agents folders into .codestudio:
    mv .github/prompts  .codestudio/
    mv .github/agents  .codestudio/
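After the move, the relevant part of the project layout should look roughly like this (illustrative only; the exact contents of each folder depend on the SpecKit version and the assistant you selected):

```text
your-project/
├── .codestudio/
│   ├── prompts/   ← moved from .github/prompts
│   └── agents/    ← moved from .github/agents
└── ...
```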

Why does this matter?

Code Studio only reads SpecKit configuration from the .codestudio folder. If you skip this step, later commands may not work.

Step 2: Create the project constitution

The project constitution is a short set of project rules (architecture, coding style, testing, performance, workflow). It becomes the source of truth your AI will follow in every later step, so it doesn’t “freestyle” or drift from team standards.

The goal is to give the AI clear rules to follow before any specs, plans, tasks, or code are generated. This prevents off‑standard decisions later.

What you do:

  1. Open Syncfusion Code Studio.
  2. Then, open the chat panel in Code Studio (Ctrl+Shift+I on Windows or Cmd+Shift+I on macOS).
  3. Once the chat panel is open, run this SpecKit constitution command:
    /speckit.constitution Create a clear project constitution with core principles that define how this project should be built. 
    
    Include standards for:
    
      • Architecture structure
      • Coding style and naming rules
      • Testing expectations
      • Accessibility requirements
      • Performance guidelines
      • Documentation and code quality
      • User‑experience consistency
      • Workflow and review process
    
    Write the principles in simple, beginner‑friendly language. 
    
    Each section should include a short rationale explaining why the rule matters.
    

What you get:

  • Code Studio creates the project constitution file (e.g., constitution.md) with architecture, coding style, documentation, testing, performance, and workflow.
  • These rules become the governing layer that every next step must respect.

Refer to the following image.

Creating project constitution

Step 3: Create the feature specification

The specification step turns your feature idea into a clear, written specification. You describe the feature from the user’s point of view (what they need and why), and Code Studio generates organized spec files based on your description. This step focuses on what to build and why, not how.

What do you do?

Run this command in the Code Studio chat panel:

/speckit.specify You are a full-stack todo app developer. Implement a subtasks & nesting feature for a todo application.

Requirements:

  • Allow users to create subtasks within parent tasks
  • Display subtasks with collapsible/expandable toggle icons
  • Indent subtasks visually to show hierarchy
  • Add "Add Subtask" button within each task
  • Show subtask count on parent tasks
  • Display progress (e.g., "2 of 3 subtasks complete")
  • Support unlimited nesting levels

Implementation should include:

  1. Data structure to support nested tasks
  2. React component for task items with expand/collapse toggle
  3. UI state management for expanded/collapsed visibility
  4. Smooth animations for toggling
  5. Keyboard accessibility

Keep the code clean, modular, and performant. Use React hooks for state management.

What do you get?

  • Code Studio turns your description into structured spec artifacts (e.g., spec.md, requirements.md) with user needs, behavior, outcomes, and acceptance criteria, so the work is clear before any technical planning begins.
  • These specs become the source of truth that later steps will use.

See the following image for clear visuals.

Defining feature specifications

Clarify requirements (Optional)

The clarify option is used when your feature specification isn’t fully clear. You don’t have to figure out what’s missing; the AI reads your spec and finds unclear or incomplete parts for you.

In the Code Studio Chat panel, run the following command:

/speckit.clarify

Note: You don’t need to add anything else (like a description) with the command. This command scans your spec, finds unclear parts, and asks you simple questions to fix them.

Code Studio checks things like:

  • Feature behavior
  • User flow
  • Edge cases
  • Performance expectations
  • Missing decisions

It then asks you questions with clear options (A, B, C, D, etc.), so you can quickly choose the correct behavior.

This helps make your spec complete and unambiguous before moving on to planning or implementation. Kindly refer to the following GIF for a better understanding.

Using the clarify option for checking unclear instructions

Step 4: Create a technical implementation plan

The plan step takes your feature specification and turns it into a technical blueprint. Here, the AI decides how the feature should be built, based on the rules from your constitution and the details in your Spec.

What you do:

Run this plan command in Code Studio chat:

/speckit.plan
Create a technical plan for this feature based on the current specification and constitution. Include:

  • The architectural approach
  • Required components, modules, and file structure
  • The data model and how information should be stored
  • UI/UX flow and important interactions
  • State management decisions
  • Performance and accessibility considerations
  • Any external tools or libraries needed

Keep explanations simple and beginner-friendly.

If any technology choices or assumptions are missing, highlight them clearly so I can decide.

What you get:

  • Code Studio reads your specification and your constitution.
  • It generates a complete technical plan (e.g., plan.md, research.md) that explains how to build the features, including architecture decisions, file and folder structure, modules/components, data model, and performance & accessibility considerations.
  • These act as the “technical map” for building the feature.

The subsequent figure illustrates the concept.

Creating a technical implementation plan

Step 5: Generate the task list

The tasks step takes your specification and technical plan and breaks them into a list of small, ordered tasks. Each task represents one clear piece of work that the Code Studio will later implement.

What you do:

Run the following tasks command in Code Studio chat:

/speckit.tasks
  • Break the technical plan into clear, ordered tasks. 
  • Each task should be actionable and focused on one outcome.
  • Group related tasks into phases and make sure every task maps back to the spec and plan.

What you get:

  • Code Studio analyzes both the specification and the technical plan and breaks them into an ordered list of actionable tasks.
  • Tasks are grouped into phases; each one has a clear purpose tied back to the spec and plan.
  • The result is a tasks.md file that acts as a step-by-step blueprint for implementation.

Consult the figure below for further details.

Creating tasks list

Analyze for consistency (Optional)

Before jumping into implementation, consider running an analysis to verify that your spec, plan, and tasks are fully aligned. This optional validation step can catch issues early, saving significant rework later.

In the Code Studio Chat panel, run:

/speckit.analyze  Run a project analysis for consistency and alignment across spec, plan, and tasks.

This command performs a comprehensive scan of all your artifacts, checking for:

  • Contradictions between your specification and technical plan.
  • Unmapped tasks that don’t correspond to documented requirements.
  • Missing implementation steps for features defined in your spec.

If inconsistencies are detected, Code Studio will suggest specific fixes. This validation is particularly valuable when your specification has evolved through multiple clarification rounds or when working with complex, multi-phase projects.

See the implementation in action below.

Analyzing the code for consistency

Step 6: Execute the implementation

The implementation step is where the Code Studio actually writes the code for your feature. It works through your task list, one task at a time, following your constitution (rules), specification (what to build), and plan (how to build it). This keeps changes small, focused, and easy to review.

What you do:

In the Code Studio Chat panel, run the following command:

/speckit.implement

Note: You don’t need to add anything else (like an extra description) with this command.

What you get:

  • Code Studio reads your tasks.md and completes the work one task at a time, making small, focused code changes. Throughout the process, it respects your constitution (rules), your specification (what to build), and your technical plan (how to build it), so changes stay on‑scope and consistent.
  • The result is actual code updates in your project that are easier to review and align with your standards.
  • Because implementation follows the task list in order and honors your guardrails, it avoids unrelated edits and “freestyling.”
  • This keeps the workflow predictable and reviewable; each task is independently testable and tied back to the spec and plan.

Watch how the feature works in action.

Implementing the code
Creating a Todo List UI with Subtasks using Syncfusion Code Studio and SpecKit

Why SpecKit works so well (When prompt‑only doesn’t)

Without specs, the traditional AI-assisted workflow looks like this:

  • You give a prompt with some details,
  • AI starts generating code,
  • Requirements become unclear,
  • Constant refactoring ensues,
  • Eventually, it ships something that mostly works.

SpecKit reverses this pattern entirely. Specifications become executable blueprints. Before writing a single line of code, you have a precise conversation with your future implementation. AI doesn’t guess, it executes against a contract you’ve already defined. The result: fewer iterations, clearer intent, and predictable output.

Faster development: More upfront planning = less refactoring, clearer decisions, fewer “what am I building?” moments.

Quick command reference

  • /speckit.constitution – Create the project rules the AI must follow.
  • /speckit.specify – Turn your feature idea into a clear written specification.
  • /speckit.clarify – Find unclear or missing details in your spec and ask questions to fix them.
  • /speckit.plan – Generate a technical plan explaining how the feature will be built.
  • /speckit.tasks – Break the plan into an ordered list of small, actionable tasks.
  • /speckit.analyze – Check for contradictions or missing steps across spec, plan, and tasks.
  • /speckit.implement – Build the feature by executing tasks step‑by‑step.

Best practices for using SpecKit (Save time, save tokens, boost quality)

Follow these best practices to work efficiently with SpecKit and save both your time and token usage.

  1. Start small, then scale: Pick one small feature (e.g., “add shopping cart icon”) instead of your entire app. Learn the workflow without wasting tokens on massive, unfocused specs.
  2. Review before you move forward: After each command (constitution, specify, plan, tasks), spend some time reviewing what was generated. Catching spec errors early prevents expensive code rewrites.
  3. Use new sessions for each command: New sessions prevent context overload. Long conversations accumulate tokens quickly, because each message resends all previous context. Separate sessions keep token usage to a minimum. Start a fresh Code Studio chat session for each SpecKit command; don’t run all commands in the same chat.
  4. Never skip workflow steps: Skipping steps means regenerating everything later when validation fails. Follow the full sequence: Constitution → Specify → Plan → Tasks → Implement. Each step validates the previous one.
  5. Choose the right AI model: Code Studio lets you select different AI models. Use advanced coding models for SpecKit commands. Some recommended models: GPT-5, GPT-5.2 Codex, or Claude Sonnet 4.5.

    Some models like GPT-5-mini and Claude Haiku 4.5 are lower-end models that may struggle with complex specifications, planning, and implementation work.

    Advanced models understand context better, generate higher-quality specs and code, and require fewer retries.

  6. Use /speckit.clarify when the specification doesn’t feel right: Vague requirements lead to vague implementations. Use the /speckit.clarify command when your feature specification isn’t fully clear.
  7. Run /speckit.analyze before big implementations: For complex features with many tasks, run the /speckit.analyze command after generating tasks but before implementation. It catches contradictions between your spec, plan, and tasks. Better to find these now than during code review.
  8. Be specific when refining outputs: When Code Studio generates specs, plans, or code that needs changes, tell it exactly what to fix, referencing specific sections or file names. Vague requests force the AI to guess what you want, leading to multiple back‑and‑forth rounds that waste tokens.

Bad: “Make the spec better” or “Improve this code.”

Good: “In spec.md, add error handling requirements for API timeouts” or “Refactor the login function to use async/await instead of callbacks.”

Frequently Asked Questions

What problem does Spec‑Driven Development (SDD) solve?

SDD prevents teams from “vibe coding,” where AI generates code from casual chat without shared rules or a plan, leading to inconsistent results, refactors, and drift from real requirements. By starting with a written spec and governing rules, you keep AI output on spec and consistent.

What is SpecKit, and how does it help?

SpecKit is a simple open‑source tool that turns your idea into clear steps, a spec, a plan, and tasks, so AI knows exactly what to build. It creates easy files like spec.md, plan.md, and tasks.md to keep everything organized.

How does Syncfusion Code Studio fit into this workflow?

Syncfusion Code Studio provides an AI‑powered environment and chat where you run the SpecKit steps: Constitution, Specify, Clarify, Plan, Tasks, Analyze, and Implement, so generation, validation, and implementation all happen in one place.

What are the core steps in the SpecKit flow?

The flow is: Constitution (project rules) → Specify (user‑focused spec) → Clarify (fix gaps) → Plan (technical blueprint) → Tasks (ordered actionable work) → Analyze (optional consistency check) → Implement (task‑by‑task coding).
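Run end to end, that flow is just a handful of chat commands. A sketch of one pass (the /speckit.specify description is a made‑up example; per the best practices above, each command runs in a fresh chat session):

```
/speckit.constitution        # session 1: set project rules
/speckit.specify Add a shopping cart icon that shows the item count
                             # session 2: write the spec
/speckit.clarify             # session 3: resolve gaps in the spec
/speckit.plan                # session 4: technical blueprint
/speckit.tasks               # session 5: ordered task list
/speckit.analyze             # session 6 (optional): consistency check
/speckit.implement           # session 7: code, task by task
```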

Any best practices to keep tokens down and quality up?

Yes. Start small, review outputs at each step, and use new chat sessions per command. Don’t skip steps, select stronger models for complex work, use the /speckit.clarify command when specs feel vague, and run the /speckit.analyze command before big implementations. Be precise when asking for refinements.

Final thoughts: Ship cleaner, faster, and with zero guessing

Thanks for reading! When we jump straight into coding, projects often drift, requirements get fuzzy, refactors pile up, and the result “mostly works” but doesn’t match what the team intended. Starting with a written spec flips this pattern: you define what you need before code is generated, so the AI follows a stable plan rather than guessing.

SpecKit + Code Studio makes that spec‑first approach practical: you set rules (Constitution), describe the feature (Specify), turn it into a plan (Plan), break it into tasks (Tasks), optionally check alignment (Analyze), and then let AI implement changes step‑by‑step (Implement). This structure reduces surprises, cuts down refactoring, and keeps output consistent with your standards.

Ready to get started? Install Syncfusion Code Studio and set up SpecKit today, and experience the difference between prompting AI and governing it.

If you have any questions, contact us through our support forum, support portal, or feedback portal. We are always happy to assist you!

How to Create Fillable PDF Forms in C# for Server-Side .NET Apps

TL;DR: Developers often need a reliable way to generate structured, interactive PDF forms on the server to support workflows like registrations, onboarding, compliance, and data capture. Server-side PDF form creation is a common requirement in enterprise applications where accuracy, security, and automation are essential. This blog explores the challenges developers face when working with fillable PDF forms in .NET and provides a practical approach to handling them effectively.

Struggling to manage form data inside the browser? Need a secure way to store and process that data for analytics or business workflows? Many developers run into the same problem.

A reliable solution is to generate and fill PDF forms on the server. This approach lets you:

  • Store form data in a structured, reusable format.
  • Automate processes for registrations, agreements, and surveys.
  • Keep sensitive information on the server, not in the browser.

The Syncfusion® .NET PDF Library helps you do this with ease. It supports creating and filling interactive PDF forms (AcroForms) entirely on the server. You can generate form fields, prefill them with default or user‑submitted values, and return secure, fillable PDFs, without exposing any sensitive data to the client.

In this blog, we’ll walk through the workflow for creating and filling PDF forms on the server using an ASP.NET Core application. Let’s get started.

Why server-side PDF form creation is critical

Applications that collect large volumes of form data must prioritize security, consistency, and performance. When all processing happens in the browser, complex forms can slow down the UI, reduce responsiveness, and in some cases freeze the page. Server‑side generation avoids these issues by keeping the workload off the client.

Here are the key reasons why server-side form creation is the better approach:

  • Keep sensitive data secure: All processing stays on the server, reducing exposure and minimizing the risk of leaks.
  • Centralized logic and control: Gives you a single point of control, making it easier to maintain, update, and enforce business rules.
  • Offload heavy PDF generation: PDF creation is resource‑intensive; performing it server‑side keeps client devices fast and responsive.
  • Scalable and automated: Server‑side processing can efficiently generate thousands of interactive forms and supports automation across large business processes.

Create an ASP.NET Core app and design an HTML form

To build a workflow for creating and filling PDF forms on the server, start by setting up a simple project. This application will collect user details through an HTML form and use the Syncfusion PDF Library to generate a PDF with form fields on the server.

Step 1: Create an ASP.NET Core application

Open the terminal in Visual Studio Code and run the following command to create your ASP.NET Core application:

dotnet new mvc -n FormFieldsSample
cd FormFieldsSample

Step 2: Add Syncfusion PDF Library package

To work with PDF documents and create fillable PDF forms, you need to add the Syncfusion PDF Library to your project. This library provides APIs for creating, editing, and filling PDF form fields on the server.

dotnet add package Syncfusion.Pdf.Net.Core

Step 3: Create a model class for form fields

Next, create a model class that defines fields like Name, Email, and Address. It binds the HTML form input to the controller for processing and PDF generation.

Here’s how you can do it in code:

using Microsoft.AspNetCore.Mvc.Rendering;

namespace FormFieldsSample.Models
{
    public class UserRegisterationModel
    {
        public string Name { get; set; } = string.Empty;
        public string EmailID { get; set; } = string.Empty;
        public string PhoneNumber { get; set; } = string.Empty;
        public DateTime DateOfBirth { get; set; } = DateTime.Today;
        public string Gender { get; set; } = string.Empty;
        public string MaritalStatus { get; set; } = string.Empty;
        public string Occupation { get; set; } = string.Empty;
        public string Address { get; set; } = string.Empty;
        public bool ReceiveNotifications { get; set; }

        public SelectList GenderList { get; set; } = new SelectList(new[] { "Male", "Female", "Other" });
        public SelectList MaritalStatusList { get; set; } = new SelectList(new[] { "Single", "Married", "Other" });
        public SelectList OccupationList { get; set; } = new SelectList(new[] { "Doctor", "Teacher", "Software Engineer", "Student", "Entrepreneur", "Other" });
    }
}

Step 4: Design the HTML form and initialize default values

Once your model is ready, the next step is to build the HTML form in Index.cshtml. This is where users enter their details, and each field is tied directly to the model properties you just created.

To make the demo easier to test, you can also load the form with default values from the controller. This way, when the page opens, you don’t have to type everything from scratch; you can immediately see how the fields bind, submit the form, and verify that those values show up correctly in the generated PDF.

<div class="control-section" style="max-width:900px;margin:auto;">
    <h2 style="margin-bottom:18px">Survey and Research Registration Form</h2>
    <h4 style="color:#666;margin-top:0;margin-bottom:18px;">Registration Details</h4>
    @using (Html.BeginForm("CreateForm", "Home", FormMethod.Post))
    {
        <div class="form-group">
            <label style="font-size:20px;margin-bottom:14px;">Full Name</label>
            @Html.TextBoxFor(m => m.Name, new { @class = "form-control" })

            <label style="font-size:20px;margin-bottom:14px">Email Address</label>
            @Html.TextBoxFor(m => m.EmailID, new { @class = "form-control", @type = "email" })

            <label style="font-size:20px;margin-bottom:14px">Phone Number</label>
            @Html.TextBoxFor(m => m.PhoneNumber, new { @class = "form-control", @type = "tel" })

            <label style="font-size:20px;margin-bottom:14px">Date of Birth</label>
            @Html.TextBoxFor(m => m.DateOfBirth, "{0:yyyy-MM-dd}", new { @class = "form-control", @type = "date" })

            <label style="font-size:20px;margin-bottom:14px">Gender</label>
            @Html.DropDownListFor(m => m.Gender, Model.GenderList, new { @class = "custom-select" })

            <label style="font-size:20px;margin-bottom:14px">Marital Status</label>
            @Html.DropDownListFor(m => m.MaritalStatus, Model.MaritalStatusList, new { @class = "custom-select" })

            <label style="font-size:20px;margin-bottom:14px">Occupation</label>
            @Html.DropDownListFor(m => m.Occupation, Model.OccupationList, new { @class = "custom-select" })

            <label style="font-size:20px;margin-bottom:14px">Address</label>
            @Html.TextBoxFor(m => m.Address, new { @class = "form-control" })

            <div class="form-check" style="margin-top:10px;">
                @Html.CheckBoxFor(m => m.ReceiveNotifications, new { @class = "form-check-input" })
                <label class="form-check-label">Would you like to get notifications from us?</label>
            </div>
        </div>

        <div style="margin-top:16px;">
            <input type="submit" class="btn btn-primary" value="Create Form" style="width:150px;" />
        </div>
    }
</div>

Initialize default values in HomeController

In the HomeController, the Index action fills in sample data such as name, email, phone number, date of birth, and all the dropdown lists. When the view loads, the form is already populated with sample values, so you can immediately submit it and test the PDF output.

public IActionResult Index()
{
    var model = new UserRegisterationModel
    {
        Name = "John Doe",
        EmailID = "john.doe@example.com",
        PhoneNumber = "011-54893-232",
        DateOfBirth = new DateTime(1995, 5, 12),
        Gender = "Male",
        MaritalStatus = "Single",
        Occupation = "Software Engineer",
        Address = "123 Main Street, Chennai",
        ReceiveNotifications = true,
        GenderList = new SelectList(new[] { "Male", "Female", "Other" }),
        MaritalStatusList = new SelectList(new[] { "Single", "Married", "Other" }),
        OccupationList = new SelectList(new[] { "Doctor", "Teacher", "Software Engineer", "Student", "Entrepreneur", "Other" })
    };
    return View(model);
}

Step 5: Launch the application in the browser

Once the form and default values are set up, the next step is to build and run the application using the following commands:

dotnet build
dotnet run
Registration form filled with default values

Convert the HTML form to interactive PDF forms on the server

Once your HTML form loads with default values, the next step is to generate a PDF document with interactive form fields. The Syncfusion .NET PDF Library handles this by letting you programmatically add fields such as text boxes, radio buttons, combo boxes, checkboxes, list boxes, and more, directly into a PDF document.

Let’s see how to convert the HTML form into an interactive PDF form:

  1. First, create a new PDF document using the PdfDocument class.
  2. Add a blank page using the PdfPage class.
  3. Add text box fields for name, email address, and phone number using the PdfTextBoxField class.
  4. Then, add the radio button group for gender selection and marital status using the PdfRadioButtonListField class, and add the list of items using the PdfRadioButtonListItem class.
  5. Add a combo box field for occupation using the PdfComboBoxField class, and add the list of items using the PdfListFieldItem class.
  6. Add a checkbox field for notifications using the PdfCheckBoxField class.
  7. Finally, assign values to each field as you create it.

The code below shows the full implementation used to generate the interactive PDF on the server:

public IActionResult CreateForm(UserRegisterationModel model)
{
    // Create a new PDF document
    using (PdfDocument pdfDocument = new PdfDocument())
    {
        // Add a new page to the PDF document
        PdfPage pdfPage = pdfDocument.Pages.Add();
        // Get graphics object to draw text and labels on the page
        PdfGraphics pdfGraphics = pdfPage.Graphics;
        //Set the standard font.
        PdfFont titleFont = new PdfStandardFont(PdfFontFamily.Helvetica, 18, PdfFontStyle.Bold);
        PdfFont subtitleFont = new PdfStandardFont(PdfFontFamily.Helvetica, 12, PdfFontStyle.Bold);
        PdfFont labelFont = new PdfStandardFont(PdfFontFamily.Helvetica, 11);
        PdfFont fieldFont = new PdfStandardFont(PdfFontFamily.Helvetica, 11);
        // Layout settings for positioning labels and field
        float labelX = 40f;       // Label column
        float fieldX = 220f;      // Field column
        float rowHeight = 30f;    // Vertical spacing
        float fieldWidth = 300f;  // Field width
        float fieldHeight = 22f;  // Field height
        float currentY = 40f;     // Start position
        // Title and subtitle
        pdfGraphics.DrawString("Survey and Research Registration Form", titleFont, PdfBrushes.Black, new PointF(labelX, currentY));
        currentY += 28f;
        pdfGraphics.DrawString("Registration Details", subtitleFont, PdfBrushes.Black, new PointF(labelX, currentY));
        currentY += 28f;
        // Helper method for text fields
        void AddTextField(string fieldName, string labelText, string value)
        {
            // Draw label text
            pdfGraphics.DrawString(labelText, labelFont, PdfBrushes.Black, new PointF(labelX, currentY));
            // Create a text box field and set its properties
            var textField = new PdfTextBoxField(pdfPage, fieldName)
            {
                Bounds = new RectangleF(fieldX, currentY - 4, fieldWidth, fieldHeight),
                Font = fieldFont,
                ToolTip = labelText,
                Text = value ?? string.Empty
            };
            // Add the text field to the PDF form
            pdfDocument.Form.Fields.Add(textField);
            // Move to the next row
            currentY += rowHeight;
        }
        // Add text fields
        AddTextField("FullName", "Full Name:", model.Name);
        AddTextField("Email", "Email Address:", model.EmailID);
        AddTextField("Phone", "Phone Number:", model.PhoneNumber);
        AddTextField("DateOfBirth", "Date of Birth:", model.DateOfBirth.ToString("yyyy-MM-dd"));
        // Add Gender selection using radio buttons
        pdfGraphics.DrawString("Gender:", labelFont, PdfBrushes.Black, new PointF(labelX, currentY));
        var genderField = new PdfRadioButtonListField(pdfPage, "Gender");
        pdfDocument.Form.Fields.Add(genderField);
        float radioY = currentY - 4;
        float radioSpacing = 110f;
        // Create radio button options for Gender
        var maleOption = new PdfRadioButtonListItem("Male") { Bounds = new RectangleF(fieldX, radioY, 12, 12) };
        var femaleOption = new PdfRadioButtonListItem("Female") { Bounds = new RectangleF(fieldX + radioSpacing, radioY, 12, 12) };
        var otherOption = new PdfRadioButtonListItem("Other") { Bounds = new RectangleF(fieldX + radioSpacing * 2, radioY, 12, 12) };
        genderField.Items.Add(maleOption);
        genderField.Items.Add(femaleOption);
        genderField.Items.Add(otherOption);
        // Draw labels next to radio button
        pdfGraphics.DrawString("Male", labelFont, PdfBrushes.Black, new PointF(maleOption.Bounds.Right + 8, maleOption.Bounds.Top + (maleOption.Bounds.Height - labelFont.Size) / 2));
        pdfGraphics.DrawString("Female", labelFont, PdfBrushes.Black, new PointF(femaleOption.Bounds.Right + 8, femaleOption.Bounds.Top + (femaleOption.Bounds.Height - labelFont.Size) / 2));
        pdfGraphics.DrawString("Other", labelFont, PdfBrushes.Black, new PointF(otherOption.Bounds.Right + 8, otherOption.Bounds.Top + (otherOption.Bounds.Height - labelFont.Size) / 2));

        genderField.SelectedIndex = model.Gender == "Male" ? 0 : model.Gender == "Female" ? 1 : 2;
        currentY += rowHeight;
        // Add Marital Status selection using radio buttons
        pdfGraphics.DrawString("Marital Status:", labelFont, PdfBrushes.Black, new PointF(labelX, currentY));
        var maritalField = new PdfRadioButtonListField(pdfPage, "MaritalStatus");
        pdfDocument.Form.Fields.Add(maritalField);
        // Create radio button options for Marital Status
        var singleOption = new PdfRadioButtonListItem("Single") { Bounds = new RectangleF(fieldX, currentY - 4, 12, 12) };
        var marriedOption = new PdfRadioButtonListItem("Married") { Bounds = new RectangleF(fieldX + radioSpacing, currentY - 4, 12, 12) };
        var otherMaritalOption = new PdfRadioButtonListItem("Other") { Bounds = new RectangleF(fieldX + radioSpacing * 2, currentY - 4, 12, 12) };
        maritalField.Items.Add(singleOption);
        maritalField.Items.Add(marriedOption);
        maritalField.Items.Add(otherMaritalOption);
        pdfGraphics.DrawString("Single", labelFont, PdfBrushes.Black, new PointF(singleOption.Bounds.Right + 8, singleOption.Bounds.Top + (singleOption.Bounds.Height - labelFont.Size) / 2));
        pdfGraphics.DrawString("Married", labelFont, PdfBrushes.Black, new PointF(marriedOption.Bounds.Right + 8, marriedOption.Bounds.Top + (marriedOption.Bounds.Height - labelFont.Size) / 2));
        pdfGraphics.DrawString("Other", labelFont, PdfBrushes.Black, new PointF(otherMaritalOption.Bounds.Right + 8, otherMaritalOption.Bounds.Top + (otherMaritalOption.Bounds.Height - labelFont.Size) / 2));
        maritalField.SelectedIndex = model.MaritalStatus == "Single" ? 0 : model.MaritalStatus == "Married" ? 1 : 2;
        currentY += rowHeight;
        // Add combo box fields (Occupation)
        pdfGraphics.DrawString("Occupation:", labelFont, PdfBrushes.Black, new PointF(labelX, currentY));
        var occupationField = new PdfComboBoxField(pdfPage, "Occupation")
        {
            Bounds = new RectangleF(fieldX, currentY - 4, fieldWidth, fieldHeight),
            Font = fieldFont,
            ToolTip = "Occupation"
        };
        occupationField.Items.Add(new PdfListFieldItem("Doctor", "Doctor"));
        occupationField.Items.Add(new PdfListFieldItem("Teacher", "Teacher"));
        occupationField.Items.Add(new PdfListFieldItem("Software Engineer", "Software Engineer"));
        occupationField.Items.Add(new PdfListFieldItem("Student", "Student"));
        occupationField.Items.Add(new PdfListFieldItem("Entrepreneur", "Entrepreneur"));
        occupationField.Items.Add(new PdfListFieldItem("Other", "Other"));
        occupationField.SelectedIndex = model.Occupation == "Doctor" ? 0 :
                                        model.Occupation == "Teacher" ? 1 :
                                        model.Occupation == "Software Engineer" ? 2 :
                                        model.Occupation == "Student" ? 3 :
                                        model.Occupation == "Entrepreneur" ? 4 : 5;
        pdfDocument.Form.Fields.Add(occupationField);
        currentY += rowHeight;
        // Add Address field
        AddTextField("Address", "Address:", model.Address);
        // Add Notifications checkbox
        pdfGraphics.DrawString("Notifications:", labelFont, PdfBrushes.Black, new PointF(labelX, currentY));
        var notifyField = new PdfCheckBoxField(pdfPage, "ReceiveNotifications")
        {
            Bounds = new RectangleF(fieldX, currentY - 4, 12, 12),
            ToolTip = "Receive Notifications",
            Checked = model.ReceiveNotifications
        };
        pdfDocument.Form.Fields.Add(notifyField);
        // Save the document and return it as a file result
        // (shown in the next section).
    }
}

Running this code example will generate a PDF that closely resembles the image displayed below.

PDF form generated on the server with interactive fields

Download and save the generated PDF form

After creating the PDF form fields, the final step is to make the document downloadable. Syncfusion’s .NET PDF Library lets you save the form to a stream and return it as a file, allowing you to deliver fillable PDFs to users with just a few lines of code.

Below is the code you need:

// Inside the using block, after adding all the fields:
// Create a new memory stream to hold the generated PDF
MemoryStream stream = new MemoryStream();
pdfDocument.Save(stream);
// Reset the stream position; otherwise the returned PDF will be empty.
stream.Position = 0;

FileStreamResult fileStreamResult = new FileStreamResult(stream, "application/pdf");
fileStreamResult.FileDownloadName = "SurveyRegistrationForm.pdf";

return fileStreamResult;
Downloadable PDF form

GitHub references

Want to explore the full sample? Check out the demo on GitHub to learn how to create and fill PDF forms on the server using the Syncfusion PDF Library.

Frequently Asked Questions

Can I set default values for form fields while creating them?

Yes, you can assign values using properties like Text, SelectedIndex, or Checked on the respective fields.

Is it possible to create fillable PDF forms from scratch?

Yes, start with a new PdfDocument and add interactive fields programmatically.

Can I modify or add fields to an existing PDF on the server?

Yes, load the PDF using PdfLoadedDocument, and add or edit fields as needed.
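As a rough sketch of that flow (the file names and the “FullName” field name are assumptions for illustration; PdfLoadedDocument lives in the Syncfusion.Pdf.Parsing namespace):

```csharp
using System.IO;
using Syncfusion.Pdf.Parsing;

// Load an existing PDF that already contains form fields.
using (FileStream input = File.OpenRead("ExistingForm.pdf"))
using (PdfLoadedDocument loadedDocument = new PdfLoadedDocument(input))
{
    // Look up a field by name and update its value.
    if (loadedDocument.Form.Fields["FullName"] is PdfLoadedTextBoxField nameField)
    {
        nameField.Text = "Jane Doe";
    }
    // Save the modified document to a new file.
    using (FileStream output = File.Create("UpdatedForm.pdf"))
    {
        loadedDocument.Save(output);
    }
}
```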

How do I make the form non-editable after filling?

Flatten the form fields using the library’s flattening feature before saving.
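For example, a minimal sketch (assuming the pdfDocument and stream from the earlier snippets; the library exposes flattening through the form’s Flatten property):

```csharp
// Flatten all form fields so the saved PDF is read-only:
// field values are painted into the page content and the
// interactive widgets are removed.
pdfDocument.Form.Flatten = true;
pdfDocument.Save(stream);
```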

Can I validate PDF form fields on the server before saving?

Yes, perform validation in your server logic before generating the PDF.

Conclusion

Thank you for reading! In this blog, we explored how to convert HTML forms into interactive PDF form fields using the Syncfusion .NET PDF Library on the server. The library also lets you edit existing form fields, add signatures for secure approvals, and flatten forms to make them non‑editable. You can also import form data from external sources or export it for integration with other systems, making it easier to manage your digital workflows.

For a detailed explanation with code examples on creating, filling, editing, and flattening PDF forms, check out this blog: Create, Fill, and Edit PDF Forms Using C#.

Want to see it in action?  Check out our Live Demo Page.

If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.

You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!

Why the cloud is not a disaster recovery strategy for your critical databases

When AWS stumbled – twice – in October 2025, many teams discovered that “we are in the cloud” is not the same as “we have disaster recovery”.

Applications went offline, customer-facing portals returned errors, and internal dashboards that teams rely on every morning failed to load.

Most of those systems were already running on managed cloud services. They had multi-AZ databases, auto scaling groups, and health checks. What they did not have was a clear answer to three simple questions:

  • How much data can we afford to lose?

  • How long can we be down?

  • Where do we run if this region doesn’t come back soon?

That gap between infrastructure and intent is where outages turn into business incidents. I see this pattern often when I talk to engineering and operations teams. The conversation usually goes like this:

“We are on <insert your favorite cloud provider>. Everything is on managed services, so we are covered for DR.”

Cloud is a platform and disaster recovery is a responsibility. Managed services help, but they do not own your RTO (recovery time objective) and RPO (recovery point objective) – you do.

Why the cloud is an environment – not a plan

It helps to separate two ideas that often get blurred:

  • The cloud is a set of capabilities: regions, availability zones, ease of deployment, snapshots, object storage, APIs, and automation.

  • Disaster recovery is a set of decisions: objectives (RTO/RPO), topologies, runbooks, owners, and regular drills.

You can be “100% in the cloud” and still have:

  • A single-region database with no tested cross-region copy.

  • Backups that no one has tried to restore in the last year.

  • Critical services, such as identity, DNS, messaging, and even your backup catalog or backup server, all tied to that same region.

  • No shared understanding of what “acceptable downtime” or “acceptable loss of data” actually means for the business.

All of these are examples of treating the platform as if it were the strategy. From the outside, it looks modern and robust. Under stress, however, it behaves like a traditional single-data-center setup – just with different logos on the status page.

Why high availability is not the same as disaster recovery

Managed databases and orchestrated clusters make it much easier to keep instances predictable. Multi-AZ deployments, auto-healing, and synchronous replication are valuable. They reduce the impact of hardware failures and local issues.

Managed services improve availability, but do not design your recovery

They handle patching, failover within a region, and some backups, but do not:

  • Define your RTO/RPO

  • Decide cross-region or cross-account replicas

  • Test restores or run DR drills

  • Coordinate application failover and dependencies

They do not solve for:

  • Control-plane or networking problems that affect the whole region.

  • A bad deployment that corrupts data and replicates that corruption instantly.

  • A compromised account where an attacker drops tables or changes configuration.

  • Human error, such as someone running a destructive command on the primary.

In all of these situations, “my managed database is multi-AZ” gives you very little comfort. You still need a known-good copy in another fault domain, a way to promote it, and a set of steps that people can execute under pressure.

Why replication is not the answer

Cross-region replication does not solve these problems either. Replication replays every change, including the bad ones: a wrong DELETE, a buggy migration, or corrupted data from an application bug will be copied to every replica as fast and as reliably as good data. That’s why your real last line of defense is not “more replicas”; it is backups and tested restore procedures. Only a backup taken before the damage, and a rehearsed way to bring it back online, can protect you from this class of failure.

Availability keeps the lights on when small things go wrong. Disaster recovery is how you handle the day when something big does. In other words: availability is preventive in nature; disaster recovery is reactive.

Important: replication protects you from infrastructure failures, and disaster recovery protects you from your own data mistakes. Logical corruption and bad writes are faithfully replicated across all nodes. Backups and restore drills are what protect you from those.


What does a real disaster recovery strategy look like?

A proper DR strategy is surprisingly straightforward on paper. The difficulty is implementing it in practice and running repeated drills to ensure it works.

It starts with business objectives rather than tools. For each critical system, you sit down with the people who own the outcome and agree on two numbers:

  • RTO (Recovery Time Objective): How long can this system be down?

  • RPO (Recovery Point Objective): How much data, in time, can we afford to lose?

In practice, that RPO number is enforced by how you handle backups and transaction logs. For PostgreSQL, it comes down to how often you take base backups, how frequently you archive WAL (the write-ahead log), and how reliably you can restore to a specific point in time. If you claim a 15-minute RPO but your backups and WAL archiving only support restoring to within an hour, your real RPO is an hour, no matter what the slide deck says.
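As an illustration, a sub-hour RPO for PostgreSQL implies continuous WAL archiving along these lines (the values and the archive command are hypothetical; the parameters themselves are standard postgresql.conf settings):

```ini
# postgresql.conf – continuous WAL archiving
archive_mode = on
# Copy each completed WAL segment to durable, off-region storage
# (replace this command with your own storage target).
archive_command = 'cp %p /backup/wal/%f'
# Force a segment switch at least every 5 minutes so the archive
# never lags more than a few minutes behind the primary.
archive_timeout = 300
```

Combined with scheduled base backups (for example via pg_basebackup) and periodic timed restores, this is what turns a claimed RPO into a measured one.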

Why these numbers rarely match across workloads

A reporting database might tolerate a few hours of downtime and some data loss; a payments ledger probably cannot.

With those numbers in hand, you can design:

  • Whether you need a warm standby in another region or another provider.

  • How you will move traffic there (DNS, load balancers, application configuration).

  • How backups flow: which region they land in, which account owns them, and how long they are retained.

  • What the runbook looks like when someone says, “We are invoking DR now”.

The tools you pick (RDS or FlexiServer, self-managed PostgreSQL with Patroni, Kubernetes operators, or third-party backup software) are an implementation detail. Strategy is independent of brand names.

The uncomfortable question: when did you last restore?

The surest way to expose the gap between “we have backups” and “we have disaster recovery” is to ask one question: “when did we last perform a full restore and cut a real application over to it?”

Not a theoretical walkthrough. Not a developer restoring a subset of data on their laptop. A timed, documented exercise that goes from “assume region A has failed” to “users are now served from region B”.

When teams run this exercise for the first time, a few things often appear:

  • Restores take longer than expected, pushing the real RTO far beyond the number written in slides.

  • Application configuration is hard-coded to a single region or endpoint.

  • Some dependencies (identity provider, message broker, and payment gateway integration, etc.) were never included in the DR thinking.

  • Ownership is fuzzy: it is not clear who can make the call to fail over, or who coordinates the transition.

None of this is a criticism of the teams. This is a common pattern: it simply happens when we assume the cloud will take care of everything and never rehearse the opposite.

How to use the cloud properly for disaster recovery

The irony is that cloud platforms are excellent foundations for disaster recovery when used intentionally. You can:

  • Spin up parallel environments in another region using infrastructure-as-code.

  • Create cross-region replicas for databases and storage with a few configuration changes.

  • Store backups in a separate account and region, reducing the blast radius of a compromise.

  • Use central logging and observability to monitor both primary and DR sites with the same tooling.

The important shift is mental: instead of saying “we are on <insert your favorite cloud provider>, everything is on managed services, so we are covered for DR”, you say “we use <insert your favorite cloud provider> to implement our DR strategy, which looks like this”.

That strategy has names, diagrams, and runbooks. It is reviewed when systems change and, a few times a year, someone actively tests it by pushing the buttons and measuring what happens.

Bringing it back to your own systems

If you want a quick sense check of where you stand today, you can do a simple exercise with your team:

  • Pick one system that really matters to the business.

  • Write down its RTO and RPO in plain language.

  • Draw the current architecture on a single page, including regions, accounts, databases, storage, and key dependencies.

  • On that diagram, mark where replicas live and where backups and WAL actually land (list out region, account, and service).

  • Next to your RPO, write down which backups and WAL streams you would use to meet it, and how you would restore them.

  • Describe, step by step, what you would do if the primary region were unavailable for 12 hours or if you discovered that the data in that system was corrupted.

If any of those steps are vague or rely on “the managed service will sort it out” or “the replica will save us” instead of a clear, tested restore path, you have just found the places where cloud and disaster recovery have been quietly conflated. Write down every gap you discover while mapping your DR; those notes are the starting point of a real strategy.

Summary and next steps

Cloud is a powerful platform. Disaster Recovery is a promise you make to the business about how much it will hurt when things go wrong: how long systems can be down and how much data can be lost. You keep that promise with architecture: replicas, backups and WAL archiving, cross-region copies, and rehearsed runbooks. Treat them as two separate things, and then deliberately combine them.

The post Why the cloud is not a disaster recovery strategy for your critical databases  appeared first on Simple Talk.


Windows stack limit checking retrospective: MIPS


Last time, we looked at how the 80386 performed stack probing on entry to a function with a large local frame. Today we’ll look at MIPS, which differs in a few ways.

; on entry, t8 = desired stack allocation

chkstk:
    sw      t7, 0(sp)           ; preserve register
    sw      t8, 4(sp)           ; save allocation size
    sw      t9, 8(sp)           ; preserve register

    li      t9, PerProcessorData ; prepare to get thread bounds
    bgez    sp, usermode        ; branch if running in user mode
    subu    t8, sp, t8          ; t8 = new stack pointer (in delay slot)

    lw      t9, KernelStackStart
    b       havelimit
    subu    t9, t9, KERNEL_STACK_SIZE ; t9 = end of stack (in delay slot)

usermode:
    lw      t9, Teb(t9)         ; get pointer to current thread data
    nop                         ; stall on memory load
    lw      t9, StackLimit(t9)  ; t9 = end of stack
    nop                         ; burn the delay slot

havelimit:
    sltu    t7, t8, t9          ; need to grow the stack?
    beq     zero, t7, done      ; N: then nothing to do
    li      t7, -PAGE_SIZE      ; prepare mask (in delay slot)
    and     t8, t8, t7          ; round down to nearest page

probe:
    subu    t9, t9, PAGE_SIZE   ; move to next page
    bne     t8, t9, probe       ; loop until done
    sw      zero, 0(t9)         ; touch the memory (in delay slot)

done:
    lw      t7, 0(sp)           ; restore
    lw      t8, 4(sp)           ; restore
    jr      ra                  ; return
    lw  t9, 8(sp)               ; restore (in delay slot)

The MIPS code is trickier to read because of the pesky delay slot. Recall that delay slots execute even if the branch is taken.¹

One thing that is different here is that the code short-circuits if the stack has already expanded the necessary amount. The x86-32 version always touches the stack, even if not necessary, but the MIPS version does the work only if needed. It’s often the case that a program allocates a large buffer on the stack but ends up using only a small portion of it, and the short-circuiting avoids faulting in pages and cache lines unnecessarily. But to do this, we need to know how far the stack has already expanded, and that means checking a different place depending on whether it’s running on a user-mode stack or a kernel-mode stack.

Note that the probe loop faults the memory in by writing to it rather than reading from it.² This is okay because we already know that the write will expand the stack, rather than write into an already-expanded stack, and nobody can be expanding our stack at the same time because the stack belongs to this thread. (If we hadn’t short-circuited, then a write would not be correct, because the write might be writing to an already-present portion of the stack.)
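The control flow is easier to follow in a higher-level rendition. This is my own Python sketch of the chkstk logic, not code from the article; it mimics the short-circuit test, the page rounding, and the delay-slot store that runs even on the loop's final iteration:

```python
PAGE_SIZE = 0x1000

def pages_to_probe(sp, alloc, stack_limit):
    """Return the page addresses chkstk would touch, highest first.

    sp          - current stack pointer
    alloc       - requested frame size (t8 on entry)
    stack_limit - lowest already-committed stack address (t9 at 'havelimit')
    """
    new_sp = sp - alloc                  # subu t8, sp, t8
    if new_sp >= stack_limit:            # sltu/beq: short-circuit, no growth needed
        return []
    target = new_sp & ~(PAGE_SIZE - 1)   # and t8, t8, -PAGE_SIZE: round down to page
    touched = []
    page = stack_limit
    while page != target:                # bne t8, t9, probe
        page -= PAGE_SIZE                # subu t9, t9, PAGE_SIZE
        touched.append(page)             # sw zero, 0(t9): the delay-slot store
    return touched                       # runs even on the exit iteration
```

For example, growing the stack by 0x6000 bytes from sp = 0x80000 with the stack already committed down to 0x7C000 touches pages 0x7B000 and 0x7A000, while a request that fits in the committed region returns an empty list.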

On the MIPS processor, the address space is architecturally divided exactly in half with user mode in the lower half and kernel mode in the upper half. The code relies on this by testing the upper bit of the stack pointer to detect whether it is running in user mode or kernel mode.³

Another difference between the MIPS version and the 80386 version is that the MIPS version validates that the stack can expand, but it returns with the stack pointer unchanged. It leaves the caller to do the expansion.

I deduced that a function prologue for a function with a large stack frame might look like this:

    sw  ra, 12(sp)      ; save return address in home space
    li  t8, 17320       ; large stack frame
    bal chkstk          ; expand stack if necessary
    lw  ra, 12(sp)      ; recover original return address
    sub sp, sp, t8      ; create the local frame
    sw  ra, nn(sp)      ; save return address in its real location

    ⟦ rest of function as usual ⟧

The big problem is finding a place to save the return address. From looking at the implementation of the chkstk function, I see that it is going to use home space slots 0, 4, and 8, but it doesn’t use slot 12, so we can use it to save our return address before it gets overwritten by the branch-and-link.

Later, I realized that the code can save the return address in the t9 register, since that is a scratch register according to the Windows calling convention, but the chkstk function nevertheless dutifully preserves it.⁴

    move t9, ra         ; save return address in t9
    li  t8, 17320       ; large stack frame
    bal chkstk          ; expand stack if necessary
    sub sp, sp, t8      ; create the local frame
    sw  t9, nn(sp)      ; save return address in its real location

    ⟦ rest of function as usual ⟧

However, I wouldn’t be surprised if the compiler used the first version, just in case somebody is using a nonstandard calling convention that passes something meaningful in t9.

Next time, we’ll look at PowerPC, which has its own quirk.

¹ Delay slots were a popular feature in early RISC days to avoid a pipeline bubble by saying, “Well, I already went to the effort of fetching and decoding this instruction; may as well finish executing it.” Unfortunately, this clever trick backfired when newer versions of the processor had deeper pipelines or multiple execution units. If you still wanted to avoid the pipeline bubble, you would have to add more delay slots, but three delay slots is getting kind of silly, and it would break compatibility with code written to the v1 processor. Therefore, processor developers just kept the one delay slot for compatibility and lived with the pipeline bubble for the other nonexistent delay slots. (To hide the bubble, they added branch prediction.)

² I don’t know why they chose to write instead of read. Maybe it’s to avoid an Address Sanitizer error about reading from memory that was never written?

³ This code is compiled into the runtime support library that can be used in both user mode and kernel mode, so it needs to detect what mode it’s in. An alternate design would be for the compiler to offer two versions of the function, one for user mode and one for kernel mode, and make you specify at link time which version you wanted.

⁴ The chkstk function preserves all registers so that it can be used even in functions with nonstandard calling conventions. Okay, it preserves almost all registers. It doesn’t preserve the assembler temporary at, which is used implicitly by the li instruction. But nobody expects the assembler temporary to be preserved. It also doesn’t preserve the “do not touch, reserved for kernel” registers k0 and k1, which is fine, because the caller shouldn’t be touching them either!

The post Windows stack limit checking retrospective: MIPS appeared first on The Old New Thing.


Code Blocks in Blazor Rich Text Editor: Setup and Best Practices


TL;DR: The Blazor Rich Text Editor’s code block feature quietly transforms technical writing by delivering clean, readable, and customizable code formatting with minimal effort. A sleek language selector, intuitive toolbar controls, smart indentation, and clean‑paste behavior elevate the editing experience far beyond basic text formatting. The result is a lightweight but powerful tool that enhances tutorials, documentation, and developer‑focused content with surprising ease.

If you write technical content, such as blogs, tutorials, how‑to guides, or documentation, you know the importance of clear, readable code formatting.

That’s where the Syncfusion Blazor Rich Text Editor’s Code Block feature stands out.

In this guide, you’ll discover how to add and customize code blocks inside the Blazor Rich Text Editor. We’ll also learn the practical tips, must‑know shortcuts, and configuration examples that elevate your content experience.

Let’s dive in!

Why code blocks matter more than you think

Developers don’t read paragraphs first; they hunt for code. Code blocks help your readers:

  • Understand examples faster.
  • Copy code without losing formatting.
  • Scan your content more easily.
  • Trust your technical accuracy.

What you can do with code blocks in Blazor RTE

The code block feature in the Syncfusion Blazor Rich Text Editor enables users to insert and format code snippets directly within the editor’s content area.

When you add the CodeBlock command to the toolbar, it appears as a toggleable button that applies code-friendly formatting, typically using the <pre> and <code> HTML tags, to either selected text or a newly created block.

This makes it much easier to display programming code, scripts, or structured text in a monospaced font with a clear, distinguishable background, significantly improving readability.
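As an illustration of what that formatting amounts to in the underlying HTML, a JavaScript code block ends up as something along these lines (the exact attributes and class names the component emits may differ by version; this sketch only shows the general pre/code shape):

```html
<pre><code class="language-javascript">function greet(name) {
    return "Hello, " + name + "!";
}</code></pre>
```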

Beyond its basic functionality, the feature offers extensive customization options. We can:

  • Add the InsertCode button to the toolbar.
  • Style the code block with custom CSS, such as adjusting background color, padding, or syntax highlighting.
  • Configure the editor to handle code blocks in both HTML and Markdown modes.
  • Integrate third-party libraries, such as CodeMirror, for advanced code editing or syntax highlighting.

Thanks to this flexibility, the Code Block feature becomes especially valuable for technical blogs, documentation platforms, learning portals, and any app that needs clean, professional code presentation. Its ease of use and customization features make it a powerful asset for building developer-friendly content experiences.

Prerequisites

To get started, ensure the following prerequisites are installed and configured on your development machine:

How to add code blocks in the Blazor Rich Text Editor

Follow these steps to add and configure the code block feature in the Blazor Rich Text Editor.

Step 1: Create a Blazor server application

Start by creating a new Blazor Server application using Visual Studio. Next, install the Syncfusion.Blazor.RichTextEditor NuGet package, configure the necessary style and script references, and follow the steps outlined in the getting started documentation.

Step 2: Add the Rich Text Editor to your application

Now, add the following code to the Index.razor page to include the Blazor Rich Text Editor component.

@using Syncfusion.Blazor.RichTextEditor

<SfRichTextEditor>
</SfRichTextEditor>

Step 3: Add the Code block tool item to the toolbar

Configure the toolbar to include the CodeBlock command. This adds the Code Block tool alongside other formatting options, such as Bold, Italic, and more.

@using Syncfusion.Blazor.RichTextEditor

<SfRichTextEditor ID="code-block">
    <RichTextEditorToolbarSettings Items="Tools">
    </RichTextEditorToolbarSettings>
</SfRichTextEditor>

@code {
    private List<ToolbarItemModel> Tools = new List<ToolbarItemModel>()
    {
        new ToolbarItemModel() { Command = ToolbarCommand.CodeBlock },
        new ToolbarItemModel() { Command = ToolbarCommand.Bold },
        new ToolbarItemModel() { Command = ToolbarCommand.Italic },
        new ToolbarItemModel() { Command = ToolbarCommand.Underline },
        new ToolbarItemModel() { Command = ToolbarCommand.CreateLink },
        new ToolbarItemModel() { Command = ToolbarCommand.Image },
        new ToolbarItemModel() { Command = ToolbarCommand.CreateTable },
        new ToolbarItemModel() { Command = ToolbarCommand.Separator },
        new ToolbarItemModel() { Command = ToolbarCommand.SourceCode },
    };
}

Step 4: Configuring code block languages

The Code Block feature supports customizable programming languages via the RichTextEditorCodeBlockSettings property, specifically the Languages and DefaultLanguage options.

@using Syncfusion.Blazor.RichTextEditor

<SfRichTextEditor ID="code-block">
    <RichTextEditorToolbarSettings Items="Tools">
    </RichTextEditorToolbarSettings>

    <RichTextEditorCodeBlockSettings
        Languages="languages"
        DefaultLanguage="javascript" />
</SfRichTextEditor>

@code {
    private List<CodeBlockLanguageModel> languages = new List<CodeBlockLanguageModel>
    {
        new CodeBlockLanguageModel { Label = "CSS",        Language = "css" },
        new CodeBlockLanguageModel { Label = "HTML",       Language = "html" },
        new CodeBlockLanguageModel { Label = "Java",       Language = "java" },
        new CodeBlockLanguageModel { Label = "JavaScript", Language = "javascript" }
    };

    private List<ToolbarItemModel> Tools = new List<ToolbarItemModel>()
    {
        new ToolbarItemModel() { Command = ToolbarCommand.CodeBlock },
        new ToolbarItemModel() { Command = ToolbarCommand.Bold },
        new ToolbarItemModel() { Command = ToolbarCommand.Italic },
        new ToolbarItemModel() { Command = ToolbarCommand.Underline },
        new ToolbarItemModel() { Command = ToolbarCommand.CreateLink },
        new ToolbarItemModel() { Command = ToolbarCommand.Image },
        new ToolbarItemModel() { Command = ToolbarCommand.CreateTable },
        new ToolbarItemModel() { Command = ToolbarCommand.Separator },
        new ToolbarItemModel() { Command = ToolbarCommand.SourceCode },
    };
}

This gives your users a friendly dropdown to pick the right language.

Code blocks in Blazor Rich Text Editor
Code blocks in Blazor Rich Text Editor

Editing tips: Make working with code blocks easier

Editing content around code blocks requires care to preserve code integrity and maintain clean formatting. Follow these tips for a smooth editing experience:

  • Adding text before a code block: Place the cursor at the start of the code block’s first line and press the Enter key to create an empty line above. Press Enter again to start a new paragraph for text.
  • Inserting text after a code block: Place the cursor at the end of the code block’s last line and press the Enter key three times to exit the code block and create a new paragraph below.
  • Using shortcuts: Press Ctrl+Shift+B (or Cmd+Shift+B on macOS) to insert a code block at the cursor’s position.
  • Preserving formatting when pasting: Paste code into a code block using Ctrl+Shift+V (or Cmd+Shift+V on macOS) to avoid external formatting (e.g., fonts or colors) and maintain the monospaced style.
  • Changing languages: When switching the language of a code block via the dropdown, verify that the code aligns with the new language for accurate syntax highlighting in the rendered output.
  • Previewing content: Since live syntax highlighting is not available during editing, use the editor’s preview mode to verify how code blocks render with syntax highlighting before publishing.

Enabling tab-based line indentation in code blocks

The code block feature also supports tab-based indentation to properly align code. This feature is crucial for languages like Python, where indentation is syntactically significant.

  • Using the Tab key: Pressing Tab within a code block typically inserts a tab character (\t) or spaces (usually four, depending on the editor’s default configuration). Use Shift+Tab to outdent selected lines.
  • Toolbar commands: Include the Indent and Outdent buttons in the toolbar to increase or decrease indentation.

Want to explore more?

For more details, you can check out:


Wrapping up

Thanks for reading! The code block feature in Syncfusion Blazor Rich Text Editor provides everything needed for a clean and professional code presentation. It’s a reliable tool for documentation platforms, technical blogs, educational sites, and any app where readable code matters.

Try our Blazor component by downloading a free 30-day trial or from our NuGet package. Feel free to have a look at our online examples and documentation to explore other available features.

If you have any questions, please let us know in the comments section below. You can also contact us through our support forum, support portal, or feedback portal. We are always happy to assist you!
