TL;DR
A quick, Opal-hosted prototype that compares a job post and your resume, then gives an alignment score, missing skills, and targeted edits.

If you have ever applied to a role and thought, “My experience fits, but my resume is not telling the story the way the job post expects,” this mini-app is for you.

In this walkthrough, we will build a self-service ATS verifier using Google Opal. You upload two documents: the job post and your resume. The app returns three things you can act on immediately: an alignment score, concrete recommendations to improve your resume, and a short list of critical missing skills to decide whether to invest more time.

What Opal is (and why it is perfect for prototypes)

Opal is a Google Labs tool for building “mini-apps” that chain prompts, model calls, and tools together using a visual editor. You can describe what you want in plain language, then tweak the workflow one step at a time in the graph view.

·        No code. You build with natural language plus a visual editor.

·        Instant hosting. Opal publishes a shareable app link without you managing servers.

·        Fast iteration. When the output is off, you edit a prompt, re-run a step, and keep moving.

One important nuance: Opal is built to host the mini-app for you. That is the feature. If you truly need to run this within your own network or VPC, Opal remains a great “prototype factory,” but you will re-implement the final version elsewhere.

Prerequisites

Before you build, make sure you have the following ready:

·        A Google account and access to Opal in your country (the editor is optimized for desktop).

·        A modern desktop browser.

·        Two documents you can upload: a job post and a resume.

·        A quick decision on privacy: do not upload confidential resumes or internal job posts you would not want stored in Drive.

Data handling matters. Opal’s FAQ notes that Google does not use your Opal prompts or generated output to train its generative AI models, and also points out that Opals live as files in your Google Drive. Treat that like any other cloud document and set sharing permissions accordingly.

The app we are building

At a high level, the workflow is simple:

·        User Input: upload job post document

·        User Input: upload resume document

·        Generate: compare and score alignment, then produce structured findings

·        Output: render a results page that is easy to skim

Workflow graph in Opal: two uploads → analysis → results page.

1) Create your two upload steps

In Opal’s editor, add two User Input steps. Name them clearly so you can reference them later in prompts.

·        Job Post Document: prompt the user to upload the job posting text.

·        Resume Document: prompt the user to upload their resume.

In your User Input step settings, limit accepted file types to the formats you want to support. In my build, I allowed .txt, .docx, and .pdf.
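
Opal enforces this in the step settings, but if you later rebuild the app outside Opal you will need the same guard yourself. A minimal sketch (the `ALLOWED_EXTENSIONS` set simply mirrors the formats above; `is_supported` is a hypothetical helper name):

```python
from pathlib import Path

# Formats accepted by the two upload steps above.
ALLOWED_EXTENSIONS = {".txt", ".docx", ".pdf"}

def is_supported(filename: str) -> bool:
    """True if the uploaded file has one of the accepted extensions."""
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS

print(is_supported("resume.PDF"))  # True (comparison is case-insensitive)
print(is_supported("resume.png"))  # False
```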

Minimal build: two input steps feeding a single analysis and output.

2) Add the analysis step (Generate)

Next, add a Generate step called something like “Analyze Documents for ATS Verification.” This is where most of the quality comes from. The goal is to force structure so the output is consistent and easy to render.

Use your original plain-language prompt as the baseline; copy and paste it straight in.

Create an ATS Job post verification app. The user will upload 2 documents. The first document is the job post they are applying for. The second document is their resume. The ATS app will compare both documents, evaluate how well the resume meets the job post requirements, and then provide recommendations to enhance the structure, outline and material in the resume. If there are obvious extreme skill gaps, the app will identify the missing skills and provide a list of recommended skills to achieve before re-evaluating. Accepted formats should be '.txt', '.docx', and '.pdf'.

Then tighten it so Opal produces a predictable response.

Here is a stronger version that works well for dashboards and webpages:

You are an ATS alignment analyst.

Inputs:
- Job post: @Job Post Document
- Resume: @Resume Document

Return a single JSON object with these keys only:
{
  "alignment_score": 0-100,
  "match_level": "Strong Match" | "Mixed" | "Weak",
  "role_summary": "1-2 sentences: what this role is really asking for",
  "top_matches": ["5-8 bullets"],
  "recommendations": ["6-10 bullets that are specific and actionable"],
  "critical_missing_skills": ["0-8 items; only include items that are clearly required by the job post"],
  "suggested_resume_edits": {
    "summary": ["2-4 bullets"],
    "core_skills": ["5-12 keywords/phrases to add or reorder"],
    "experience": ["3-6 bullets: how to rewrite or add evidence"],
    "projects": ["optional; bullets"],
    "certifications": ["optional; bullets"]
  },
  "keywords_to_include": ["10-20 ATS keywords found in job post"],
  "confidence_notes": ["call out assumptions or ambiguous areas"]
}

Rules:
- Base findings on the text in the job post and resume. Do not invent experience.
- If a requirement is missing, say so plainly.
- Prefer concrete language: tech names, scope, outcomes, and ownership.

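Forcing JSON pays off again if you ever move this workflow out of Opal: you can validate the model's reply before rendering it. Here is a sketch under that assumption; `validate_analysis` is a hypothetical helper, and the stubbed reply stands in for a real model response with the keys from the prompt above:

```python
import json

# Keys the analysis step is asked to return in the prompt above.
EXPECTED_KEYS = {
    "alignment_score", "match_level", "role_summary", "top_matches",
    "recommendations", "critical_missing_skills", "suggested_resume_edits",
    "keywords_to_include", "confidence_notes",
}

def validate_analysis(raw: str) -> dict:
    """Parse the model's JSON reply and fail fast on shape problems."""
    data = json.loads(raw)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    score = data["alignment_score"]
    if not (isinstance(score, (int, float)) and 0 <= score <= 100):
        raise ValueError(f"alignment_score out of range: {score!r}")
    if data["match_level"] not in {"Strong Match", "Mixed", "Weak"}:
        raise ValueError(f"unexpected match_level: {data['match_level']!r}")
    return data

# Stubbed reply standing in for real model output:
reply = json.dumps({
    "alignment_score": 72, "match_level": "Mixed",
    "role_summary": "Backend role with a data focus.",
    "top_matches": [], "recommendations": [],
    "critical_missing_skills": [], "suggested_resume_edits": {},
    "keywords_to_include": [], "confidence_notes": [],
})
result = validate_analysis(reply)  # raises if the shape drifts
```
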
3) Render the results as a mini web page (Output)

Add an Output step and select a webpage-style output. Give the model the structure you want: a score at the top, two columns below, and a clear list of missing skills.

Take the JSON output from @Analyze Documents for ATS Verification and render a clean, scannable results page.

Layout:
- Header: "ATS Analysis Results"
- Big score badge: alignment_score + match_level
- Left panel: Recommendations for Improvement (bulleted)
- Right panel: Critical Missing Skills (chips or list)
- Footer: a short “Priority focus” note based on critical_missing_skills

Keep text concise. Use headings and whitespace. Avoid long paragraphs.

Example results screen: alignment score, recommendations, and missing skills.

4) Test, iterate, and debug like an operator

Opal gives you a Preview pane and a Console. Use them. Run each step independently until your output is stable.

·        If the score feels random, add a scoring rubric (must-have requirements are worth more than nice-to-haves).

·        If it invents experience, harden the rule: “Do not claim a tool unless it appears in the resume.”

·        If it misses keywords, force it to list the exact phrases pulled from the job post.

·        If the output page feels messy, shorten each field and increase structure (more lists, fewer paragraphs).
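
The rubric idea in the first bullet can be made concrete. One simple scheme, sketched here as an assumption rather than anything Opal computes for you, is a weighted average where must-have requirements dominate (`rubric_score` and the 70/30 split are illustrative choices):

```python
def rubric_score(matched_must: int, total_must: int,
                 matched_nice: int, total_nice: int,
                 must_weight: float = 0.7) -> int:
    """Deterministic 0-100 alignment score.

    Must-have requirements carry most of the weight, so missing one
    hurts far more than missing a nice-to-have.
    """
    must = matched_must / total_must if total_must else 1.0
    nice = matched_nice / total_nice if total_nice else 1.0
    return round(100 * (must_weight * must + (1 - must_weight) * nice))

# 3 of 5 must-haves and 2 of 4 nice-to-haves matched:
print(rubric_score(3, 5, 2, 4))  # 57
```

Describing a rubric like this in your Generate prompt gives the model a fixed recipe to follow, which makes the score far less random between runs.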

Common tweaks that make this app feel ‘real’

·        Require evidence: for each recommendation, point to the resume section it affects (Summary, Skills, Experience).

·        Split missing skills: “Must-have” vs “Nice-to-have” when the job post is clear enough.

·        Add a “Rewrite suggestions” section that outputs 2-3 resume bullets in the style of the job post.

·        Keep a short “confidence notes” area so the user knows what the model is guessing.
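
The keyword-focused tweaks above can also be checked deterministically if you productize later. A minimal sketch, assuming plain-text inputs and simple case-insensitive substring matching (`keyword_coverage` is a hypothetical helper, not an Opal feature):

```python
def keyword_coverage(keywords: list[str], resume_text: str):
    """Split ATS keywords into those found in the resume and those missing."""
    text = resume_text.lower()
    present = [k for k in keywords if k.lower() in text]
    missing = [k for k in keywords if k.lower() not in text]
    return present, missing

resume = "Built ETL pipelines in Python; deployed services on Kubernetes."
present, missing = keyword_coverage(["Python", "Kubernetes", "Terraform"], resume)
print(present)  # ['Python', 'Kubernetes']
print(missing)  # ['Terraform']
```

Substring matching is crude (it will not catch synonyms like "K8s"), but it is a useful sanity check on the model's `keywords_to_include` list.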

Sharing and access control

When your Opal is ready, publish it and share the app link. By default, Opals are private, and you can choose whether others can access the editor view and remix your workflow. Treat this like sharing a Google Doc: if the prompts include sensitive logic, do not grant editor access.

Limitations and a clean path to production

Opal is perfect for proving the workflow and getting the UI right. It is not the place I would store sensitive HR data long-term. If you want a production-grade version, keep Opal as your prototype, then rebuild the workflow in a small web app where you control storage, auth, and logging.

·        Prototype in Opal: validate the prompt chain, output format, and user experience.

·        Productize elsewhere: wrap the same logic behind auth, add audit logging, and store uploads securely.

·        Treat the model as a drafting engine, not a source of truth. Always verify before submitting a resume.
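
To make the "audit logging, store uploads securely" point concrete, here is a stdlib-only sketch of what that wrapper might look like when you rebuild outside Opal. Everything here is illustrative: `audited_analyze` is a hypothetical name, and the lambda stands in for your real model call. The key design choice is logging content hashes instead of the documents themselves, so the audit trail never retains sensitive text:

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ats-verifier")

def audited_analyze(job_post: str, resume: str, analyze) -> dict:
    """Run the analysis step behind an audit log.

    Records SHA-256 digests of the uploads rather than the raw documents,
    so each run is traceable without storing confidential text.
    """
    record = {
        "ts": round(time.time()),
        "job_post_sha256": hashlib.sha256(job_post.encode()).hexdigest(),
        "resume_sha256": hashlib.sha256(resume.encode()).hexdigest(),
    }
    result = analyze(job_post, resume)
    record["alignment_score"] = result.get("alignment_score")
    log.info("ats_run %s", json.dumps(record))
    return result

# Stub standing in for the real model call:
stub = lambda jp, r: {"alignment_score": 70, "match_level": "Mixed"}
out = audited_analyze("job text", "resume text", stub)
```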

Verification checklist

Opal evolves fast. Before you publish your tool publicly, validate these items in your own environment:

·        Supported upload formats for User Input in your Opal instance (TXT, DOCX, PDF).

·        Maximum file size and page limits for your use case.

·        Whether your Output step supports a “webpage” layout in the way you expect.

·        Your sharing settings: app-only vs editor view and remix.

·        Data handling expectations based on your org’s policies.

Conclusion: from demo to daily tool

When you’re done, you won’t just have a demo. You’ll have a functional tool you can use weekly to get closer to your dream role: validate fit, tighten your resume, and see what skills are truly missing before you apply.

Now treat it like CI/CD.

Build the workflow, test and tune the outputs, then deploy improvements back into your resume and learning backlog. Each run is a new release. Each release improves alignment. Over time, you’re not just applying to roles, you’re converging on them.