What this is, and what it is not
Point a phone at an HVAC data plate in a crawl space. The system reads the make, model, and serial, matches it to the manufacturer's spec, and drops the record on the job. Point a phone at a damaged section of roof. The system tags the photo with a damage category, the date, the address, and files it into a searchable archive. Your estimator pulls up every photo of granule loss across every job in two clicks.
What this is not is a photo-to-estimate machine. Insurance-grade damage classification, the kind EagleView, HOVER, and CompanyCam offer, is built on millions of labeled photos and years of model work. A boutique shop building that from scratch gets maybe 60 to 75 percent accuracy on a narrow category. That is fine for an internal assist. It is not fine for a number you send to an adjuster. When a client needs insurance-grade output, we integrate the vendor who already has it. We do not rebuild that wheel.
Where a custom model actually earns its cost
Three jobs, consistently: data-plate OCR on equipment your techs service most, photo tagging and archival against your own taxonomy, and a damage-category assist that flags what your experienced estimator would flag. On all three, the value is speed and searchability, not automated pricing.
HVAC data plates are the cleanest win. The text is standardized, the plate is usually readable, and the output feeds directly into service history and parts lookup. We routinely hit 85 percent first-pass accuracy on clean plates and 60 to 75 percent on weathered or partially obscured ones, with a confidence score that tells the tech when to reshoot.
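For the technically curious, here is a minimal sketch of what the plate-reading step does after OCR: pull structured fields out of raw text. The field patterns and sample plate text below are illustrative assumptions; the production parser is tuned per manufacturer and equipment family.

```python
import re

# Hypothetical field patterns -- real plates vary by manufacturer,
# and the production parser is tuned per equipment family.
PLATE_FIELDS = {
    "model": re.compile(r"MODEL\s*(?:NO\.?|#)?[:\s]+([A-Z0-9\-]+)", re.I),
    "serial": re.compile(r"SERIAL\s*(?:NO\.?|#)?[:\s]+([A-Z0-9\-]+)", re.I),
}

def parse_plate(ocr_text: str) -> dict:
    """Extract model and serial from OCR text; missing fields come back as None."""
    record = {}
    for field, pattern in PLATE_FIELDS.items():
        match = pattern.search(ocr_text)
        record[field] = match.group(1) if match else None
    return record

# Sample OCR output (illustrative, not a real unit):
raw = "CARRIER  MODEL NO: 24ABC636A003  SERIAL NO: 1234X56789"
print(parse_plate(raw))  # {'model': '24ABC636A003', 'serial': '1234X56789'}
```

A `None` field or a low OCR confidence score is what triggers the reshoot prompt on the tech's phone.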
The archive is the quiet compounding asset
Every jobsite photo your techs have ever taken becomes searchable by date, address, equipment, damage type, and any tag you define. Two years in, that archive answers questions your office used to spend an afternoon on. "Pull every photo of that unit from the last three service visits." "Show me every roof we did for this HOA in 2024." "Find the install photos for the unit we are warrantying."
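Those questions reduce to simple filters over tagged records. The sketch below shows the idea with an in-memory list; the real archive sits in a database and the tag taxonomy is defined per client, so the field names and tags here are assumptions.

```python
from datetime import date

# Illustrative records only -- field names and tags are assumptions;
# the real taxonomy is defined with each client.
photos = [
    {"path": "job_1042/plate.jpg", "address": "14 Elm St",
     "taken": date(2024, 3, 2), "tags": {"hvac", "data-plate"}},
    {"path": "job_1042/roof_ne.jpg", "address": "14 Elm St",
     "taken": date(2024, 3, 2), "tags": {"roof", "granule-loss"}},
    {"path": "job_1107/roof_s.jpg", "address": "9 Oak Ave",
     "taken": date(2024, 7, 19), "tags": {"roof", "granule-loss"}},
]

def search(photos, tag=None, address=None, year=None):
    """Filter the archive by any combination of tag, address, and year."""
    results = photos
    if tag is not None:
        results = [p for p in results if tag in p["tags"]]
    if address is not None:
        results = [p for p in results if p["address"] == address]
    if year is not None:
        results = [p for p in results if p["taken"].year == year]
    return results

# "Pull every photo of granule loss across every job" is one call:
hits = search(photos, tag="granule-loss")
print([p["path"] for p in hits])
```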
The first month is where the model earns trust. We benchmark accuracy on your real photos before anyone relies on the output, publish the numbers, and tune against whatever your techs actually capture in the field.
What we install
- Data-plate OCR tuned to the equipment categories you service most
- Photo tagging and archive search by date, address, equipment, and damage type
- Damage-category assist that flags likely categories with confidence scores, for estimator review
- Mobile capture workflow for iOS and Android that your techs can use from day one
- Integration with your estimating or field service software so photos land on the job record
- Fallback path to partner vendors (EagleView, HOVER, or similar) when insurance-grade output is required
What you get
- Trained and versioned models for the specific categories in scope
- Mobile capture workflow or app integration tested on your devices
- Admin dashboard showing accuracy benchmarks and volume processed
- Retraining protocol for adding equipment or categories over time
- Published baseline accuracy numbers for every category at launch
- Documentation for onboarding new techs to the photo workflow
Questions
How many photos do you need to train?
For data-plate OCR, a few hundred plates per equipment family gets us to a working model. For damage categorization assist, several hundred labeled photos per category is the floor, and accuracy climbs with volume. We pull from your existing job archives, which most contractors have and have never organized.
Why the paid discovery?
Because image recognition is the one service where we cannot scope a fair fixed price without looking at your actual photos. Two weeks of discovery gives us an honest read on training data quality, category separability, and baseline accuracy. If we come back and tell you a custom model is the wrong call and you should license a vendor instead, that is the honest answer and you have not spent build money to hear it.
Can it generate an estimate from a photo?
Not for insurance work. For internal purposes, it can produce a draft line-item sheet that an estimator reviews and corrects. The time savings come from the first pass. The judgment stays with the human.
What about bad photos, low light, wrong angle?
The model returns a confidence score. Low-confidence results prompt a reshoot rather than pushing a wrong answer downstream. We tune the confidence threshold during the first month against real field conditions.
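The reshoot gate itself is deliberately simple. A minimal sketch, assuming a 0.80 cutoff; the actual threshold is tuned per category during the first month against your field conditions.

```python
# The 0.80 cutoff is an assumption for illustration; in practice the
# threshold is tuned per category during the first month in the field.
RESHOOT_THRESHOLD = 0.80

def route_result(label: str, confidence: float) -> dict:
    """Accept high-confidence reads; flag low-confidence ones for a reshoot."""
    action = "accept" if confidence >= RESHOOT_THRESHOLD else "reshoot"
    return {"action": action, "label": label, "confidence": confidence}

print(route_result("granule-loss", 0.91)["action"])  # accept
print(route_result("granule-loss", 0.42)["action"])  # reshoot
```

Low-confidence results never reach the job record silently; the tech sees the prompt while still standing in front of the unit.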
When do you recommend partnering with EagleView or HOVER instead of building?
Any time the output has to carry insurance-grade accuracy on aerial measurements or damage classification. Those vendors have a multi-year head start on training data we cannot match as a boutique shop. We integrate their output, we do not compete with it.
Next step
Book a thirty-minute diagnostic. We look at your actual workflow and tell you whether this fits. Free. No slides.