
CRAM: Buyer Research to Content Engine

A production web app returning ranked, confidence-scored content recommendations from buyer journey inputs. Deployed on Azure, powered by OpenAI.

Deployed on Azure · OpenAI-Powered · Confidence-Scored Output
The Context

The situation

I had just finished building a deep buyer intelligence system for a Fortune 10 healthcare client: 150+ research sources, a segmentation model across core enterprise buyer types, a Four Engines decision framework, and a 76-slide persona library designed for sales enablement.

The research was strong. The problem was usability.

Once that level of work gets packaged into slides, it often stops being operational. People know it exists, but they don't use it consistently in the flow of daily work. A demand gen lead planning a campaign may start from scratch instead of mining existing buyer insight. A salesperson preparing for a meeting may rely on instinct instead of using the research infrastructure already available to them.

In other words, the organization had high-value intelligence, but no lightweight system for activating it repeatedly. I built CRAM to solve that problem.

My Role

What I owned

I designed the product, wrote the prompt architecture, and built the application myself.

This included defining the input model, structuring the outputs, building the UI, and integrating the AI layer through the OpenAI API. I also made the deployment decisions to ensure the tool fit within the client's technical environment, including deployment on Azure Static Web Apps to align with the Microsoft ecosystem the client already operated in.

This was not a throwaway prototype. It was designed as an internal production application capable of supporting real content planning and sales preparation workflows.

The Approach

How I built it

The core idea behind CRAM was simple: instead of forcing users to dig through research decks, the product should translate buyer intelligence into actionable recommendations based on a few structured inputs.

I designed CRAM as a buyer-journey content recommender. The input layer captures the variables that shape what kind of content a team should create or use: the initiative being supported, the job the content needs to accomplish, the relevant buying engines involved, strategic priority, audience type, and trigger context.
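As a rough sketch of that input model (the field names and values below are hypothetical illustrations, not the production schema), the UI assembles a small structured object before anything is sent to the model:

    // Hypothetical sketch of the structured inputs CRAM captures (names illustrative only)
    const recommendationRequest = {
      initiative: "Q3 care-coordination campaign",  // the initiative being supported
      contentJob: "educate and build urgency",      // the job the content needs to accomplish
      buyingEngines: ["Clinical", "Financial"],     // relevant buying engines from the framework
      strategicPriority: "high",                    // strategic priority
      audienceType: "economic buyer",               // audience type
      triggerContext: "new regulatory requirement"  // trigger context
    };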

From there, the system generates a ranked set of content recommendations rather than a generic brainstorm. Each recommendation is tied back to the underlying buyer logic, which keeps the outputs grounded in the strategic framework rather than making them feel like disconnected AI suggestions.

The outputs include ranked asset recommendations, confidence scoring, channel and format alignment, reasoning mapped to buying-engine priorities, and a structured outline for the top recommended piece of content. I also built in proof requirements and objection-handling logic so the tool could help teams think beyond just "what should we make?" and move into "what must this asset prove to move a buyer forward?"
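To picture the output structure, a single ranked recommendation can be thought of as a record along these lines (a hedged sketch; the field names and scoring scale are assumptions, not the real schema):

    // Illustrative shape of one ranked recommendation (field names are assumptions)
    const exampleRecommendation = {
      rank: 1,
      assetType: "ROI briefing for the CFO",
      confidence: 0.82,                        // confidence score
      channel: "sales-led, 1:1 follow-up",     // channel and format alignment
      reasoning: "Maps to the Financial engine's cost-containment priority",
      proofRequirements: ["third-party outcomes data", "peer benchmark"],
      objectionHandling: ["implementation lift", "integration with existing workflows"],
      outline: ["Problem framing", "Cost of inaction", "Evidence", "Recommended next step"]
    };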

A major part of the value came from the prompt architecture. I structured the system so outputs were specific enough to be useful, consistent enough to trust, and fast enough to support scenario planning. That matters in a workflow context. A strategist should be able to test several angles in minutes, not spend half a day trying to assemble one brief.
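A minimal sketch of what that structuring looks like (the wording below is illustrative, not the production prompt): the system message pins down the role, the framework grounding, and the exact output contract, so every run comes back in the same shape and can be compared across scenarios.

    // Hypothetical system prompt skeleton; the real prompt encodes the full buyer framework
    const SYSTEM_PROMPT = `
    You are a content strategist grounded in the client's buyer research.
    Use the Four Engines framework and the structured inputs to rank content recommendations.
    Return ONLY valid JSON with a recommendations array (rank, assetType, confidence, channel,
    reasoning, proofRequirements, objectionHandling) and an outline for the top recommendation.
    `;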

I also designed the product with future expansion in mind. CRAM Sales, which is currently in development, uses the same underlying logic to generate tailored presentation decks for specific buyers and deal stages.

The Build

Under the hood

CRAM was built with plain HTML, CSS, and JavaScript, then deployed on Azure Static Web Apps.

That choice was deliberate. The client operates in a Microsoft-centered environment, so the deployment needed to fit the ecosystem cleanly without introducing unnecessary infrastructure or operational friction.

The AI layer runs through the OpenAI API. I structured the prompts to return predictable, JSON-shaped outputs that the interface could render reliably. That decision mattered because the tool needed to feel like a usable product, not a chat experiment. Consistency in structure made it possible to present the results in a clear, repeatable way.
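As a hedged sketch of that integration, assuming the standard OpenAI Chat Completions endpoint (the model name, helper names such as getRecommendations and recommendationRequest, and the wiring below are illustrative, not the production code):

    // Minimal sketch: send structured inputs, request a JSON-shaped response the UI can render
    async function getRecommendations(recommendationRequest, systemPrompt, apiKey) {
      const response = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${apiKey}`
        },
        body: JSON.stringify({
          model: "gpt-4o",                            // illustrative model choice
          response_format: { type: "json_object" },   // keeps the output predictable and parseable
          messages: [
            // the system prompt must instruct the model to return JSON for json_object mode
            { role: "system", content: systemPrompt },
            { role: "user", content: JSON.stringify(recommendationRequest) }
          ]
        })
      });
      const data = await response.json();
      return JSON.parse(data.choices[0].message.content); // structured result the interface renders
    }

Keeping the response JSON-shaped is what lets the interface re-render recommendations the same way every time as inputs change.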

The interface uses a split-panel layout so users can keep inputs visible while reviewing outputs. That sounds simple, but it matters when people are iterating on scenarios. Change a variable, regenerate the recommendations, compare the difference, and keep moving without losing context. The UI was designed to support that workflow directly.

At a product level, the build reflects a principle I care about a lot: AI tools are only useful when the surrounding structure is strong. Good prompts help, but good product design is what makes them repeatable.

The Results

What it delivered

CRAM moved beyond the proof-of-concept stage and into active consideration for the client's internal workflows.

The demand generation and sales enablement teams began evaluating how to integrate it more formally into campaign planning and sales preparation processes, using it as a bridge between static buyer research and day-to-day execution.

Just as importantly, CRAM validated the value of the underlying research system. The buyer intelligence infrastructure behind it — the Four Engines Framework, segment analysis, and persona library — was already being used in company-wide sales training. CRAM extended that value by making the same intelligence easier to activate in real work.

The product remains internal and is not publicly accessible, but it demonstrates a capability I care deeply about: taking dense strategic research and turning it into a system people can actually use.
