SHUR IQ / System Explainer / March 2026
View Live Report →
How SHUR IQ Works

The Intelligence System Behind the Report

This is a guided tour of the system behind our weekly intelligence reports. What it does, how it works, what makes it different from Google Trends or custom monitoring, and where it goes next.
Right now, for the Micro-Drama weekly report, SHUR IQ is like a gifted analyst who just started the job. The analytical engine is strong: it searches across 4 languages, scores companies on 5 structural dimensions, tracks week-over-week deltas, detects gaps through knowledge graph analysis, and produces publication-ready editorial output. The ontology it runs with (which dimensions to score, how to weight them, what sources to prioritize, what thresholds define each tier) has had minimal tuning. Those are two separate things, and evaluating the system means understanding that distinction.

The engine determines how well it executes. The ontology determines how well-aimed the execution is. Right now, the engine is doing its job. The ontology needs our combined experience, domain knowledge, source-finding instincts, and editorial guidance to reach its potential. That's where the leverage is. Every hour we spend refining the scoring criteria, curating source lists, or adjusting dimension weights compounds across every future weekly run. The system absorbs that expertise permanently.
01

The Stack Ranking Engine

Explainable scoring across 5 structural dimensions

Every company in the category gets scored on 5 dimensions. Each score maps to specific evidence from that week's source corpus: app store rankings, revenue data, content volume, partnership announcements, engagement metrics.

The composite score is a weighted sum. The weights reflect which dimensions matter most for structural brand power in a given vertical. The delta column shows week-over-week movement. You can see exactly who moved, how much, and why.

 #  Company     Composite  Delta     Content  Narrative  Distrib.  Community  Monetize
 1  ReelShort   84.05      ▼ -0.55   85       78         88        76         95
 2  DramaBox    78.75      ▲ +1.25   82       72         87        62         92
 3  Disney      74.25      ▲ +3.25   50       92         87        68         70
 4  iQiYi       64.50      –         75       65         70        50         60
 5  Netflix     62.80      ▼ -3.00   28       38         93        68         85
 6  CandyJar    58.65      NEW       65       52         58        65         55
 7  JioHotstar  58.30      ▲ +5.50   55       40         82        55         52
 8  GoodShort   57.10      ▲ +2.80   68       58         57        47         55

Notice Netflix: #5 composite, but #1 in distribution (93). Amazon: #10 composite, also massive distribution (83). Both are falling because distribution power without content commitment is a wasting asset in this category. The SBPI (Structural Brand Power Index) captures that structural weakness in a way a headline never would.

What this tells Nuri: When a client in an adjacent vertical asks "who's winning in micro-drama?" the answer is a structural map of 17 players with different strengths, different trajectories, and different vulnerabilities. That's the foundation of a BI recommendation engine.

02

The Attention Stack

Every company has a shape. The shape tells you what kind of player they are.

This is the part that takes a second to click. Once it does, you won't look at competitive rankings the same way.

Five horizontal layers, one per SBPI dimension. Every company is a dot on each layer, positioned by its score (0 to 100). Dots are color-coded by tier. When you hover over a company, dashed lines connect its five dots across all dimensions, tracing a shape.

That shape is the company's structural fingerprint.

SBPI Attention Stack

Hover any dot to trace a company's shape
Strong (70+)
Emerging (55-69)
Niche (40-54)
Limited (<40)
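The tier legend maps directly to a threshold function. A minimal sketch, using the cutoffs exactly as published (Strong 70+, Emerging 55-69, Niche 40-54, Limited below 40):

```python
def tier(score: float) -> str:
    """Assign a tier label from a 0-100 dimension or composite score."""
    if score >= 70:
        return "Strong"
    if score >= 55:
        return "Emerging"
    if score >= 40:
        return "Niche"
    return "Limited"
```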

Try hovering ReelShort. Its shape is wide and high across all five layers. That's a balanced powerhouse. Now hover Netflix. Massive distribution spike, near-zero content. That's a sleeping giant with a structural hole. Hover COL/BeLive. Almost nothing across four dimensions, then a spike to 90 on monetization. That's a pure infrastructure play.

Each shape tells a different story. A diagonal line from low-left to high-right is a company that's strong on monetization but weak on content. A flat high line is a dominant player. A spiky, uneven shape is a company with a single bet.

No other intelligence system visualizes competitive position this way. It turns a 17-row spreadsheet into a pattern you can read in two seconds.

The Key Distinction
SHUR IQ has two independent parts. The report-making logic (how well it searches, scores, writes, and visualizes) and the ontology it runs with (which dimensions, what weights, what criteria). Evaluating the system means evaluating these separately. The engine is strong. The ontology has had minimal tuning. Judge the car by how well it drives. Then judge the route separately.
03

It Learns From Its Mistakes

The CandyJar story

Week 10 ran the full pipeline for the first time. It tracked 16 companies, scored them, wrote the report. Solid output. But there was a blind spot.

CandyJar / Inkitt: $118M Raised, 70M Episodes/Month, 40 Min/Day Engagement
A top-5 player by engagement was completely absent from the Week 10 report. CandyJar (backed by Inkitt) had raised $118M, was serving 70 million episodes per month, and users were spending 40 minutes per day on the platform. The system didn't find it because its search queries weren't targeting new entrants outside the existing registry.
What happened next: We added a discovery sweep (Step 3b) to the search pipeline. 5-8 queries specifically targeting companies NOT already tracked. App store ranking lists, startup funding announcements, comparison articles. CandyJar was found, corroborated with 2 sources, scored at 58.65 (Emerging Power, #6 overall), and added to the Week 11 report.

The structural fix: The discovery sweep is now a permanent part of every weekly run. The system will never miss a CandyJar-class entrant again. That correction took minutes to implement and runs automatically from now on.
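A minimal sketch of the discovery sweep's gating logic, assuming the search layer hands back candidate companies with their supporting sources. The function and field names here are illustrative, not the pipeline's actual interface; the two rules it encodes (target companies outside the registry, require two-source corroboration) come straight from the text.

```python
MIN_SOURCES = 2  # corroboration threshold before a new entrant is scored

def discovery_sweep(registry: set[str],
                    query_results: dict[str, list[str]]) -> dict[str, list[str]]:
    """Keep only companies NOT already tracked, with enough
    independent sources to corroborate them.

    query_results maps candidate company name -> source URLs found.
    """
    return {
        company: sources
        for company, sources in query_results.items()
        if company not in registry and len(set(sources)) >= MIN_SOURCES
    }
```

Under this sketch, a CandyJar-class entrant with two independent sources passes the gate; a single-source rumor does not.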

This is the difference between a static report and a system. A consulting team that misses a player has to redo their research. SHUR IQ gets a permanent upgrade to its search methodology. Every correction compounds.

Next training target: Curated source lists. Right now the system searches broadly. A Google Sheet of 50-100 vetted sources per vertical would double the signal quality. A 2x improvement from a spreadsheet update.

04

Multi-Language Search

4 language tiers, 52 sources, corroboration required

A year ago, building a multi-language competitive intelligence pipeline across Chinese, Korean, and Hindi media would have taken a team of analysts, translation services, and months of development. We added it in one session.

  • English: 38 sources (Deadline, Variety, Sensor Tower)
  • Chinese: 8 sources (36kr, Huxiu, Jiemian News)
  • Korean: 4 sources (KOCCA, Chosun Ilbo)
  • Hindi / India: 2 sources (Economic Times, MediaNama)

Chinese sources (Tier 1) run every week because that's where the market originated. Search terms include 微短剧 (micro-drama), 短剧出海 (short drama overseas expansion), and company-specific queries for parent entities like 九州文化 (Jiuzhou Culture, ReelShort's parent).

Korean and Indian sources are conditional, triggered when signals from English-language monitoring suggest activity in those markets. This week, JioHotstar's 100-microdrama commitment for IPL triggered Hindi/India searches, and the Korean "Warring States" platform launches triggered Korean searches.

The corroboration rule: No foreign-language finding enters the scoring rationale without confirmation from at least two independent sources. Translated findings are tagged [translated:zh], [translated:ko], [translated:hi] so you can see exactly what came from which market.

  • 52 sources this week, across 4 languages, deduplicated
  • 14 foreign findings: 8 Chinese, 4 Korean, 2 Hindi
  • 100% corroborated: two-source minimum before scoring

Each weekly run builds on the previous one. The search term registry grows. The source list improves. The translation accuracy compounds. After 10 weeks, you have a multi-language industry database that no Google Trends dashboard can replicate.

05

The Ontology Is Customizable

Different clients. Different verticals. Different scoring frameworks.

The SBPI framework you see in the micro-drama report (Content Power, Narrative Control, Distribution Power, Community Depth, Monetization Maturity) is one configuration. The system runs on whatever ontology you give it.

For a SaaS vertical, you might weight Distribution Power at 10% and add a "Platform Stickiness" dimension at 25%. For a consumer brand vertical, Community Depth might jump to 30%. For a regulated industry, you'd add a "Regulatory Positioning" dimension entirely.

The dimensions, the weights, the tier thresholds, the scoring criteria, the search terms, the company registry. All configurable per client, per vertical, per subscriber. The Totem Protocol that powers SHUR IQ treats ontology as a first-class parameter.
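As an illustration of ontology-as-parameter, here are two hypothetical configurations: one echoing the micro-drama SBPI and one for the SaaS example above. Every weight and threshold here is invented for the sketch; none reflect the tuned production values.

```python
MICRO_DRAMA = {
    "dimensions": {
        "content_power": 0.25, "narrative_control": 0.15,
        "distribution_power": 0.25, "community_depth": 0.15,
        "monetization_maturity": 0.20,
    },
    "tiers": {"Strong": 70, "Emerging": 55, "Niche": 40},  # Limited below 40
}

SAAS = {
    "dimensions": {
        "content_power": 0.20, "narrative_control": 0.15,
        "distribution_power": 0.10,       # de-weighted, per the example above
        "platform_stickiness": 0.25,      # dimension added for this vertical
        "community_depth": 0.10, "monetization_maturity": 0.20,
    },
    "tiers": {"Strong": 70, "Emerging": 55, "Niche": 40},
}

def validate(ontology: dict) -> bool:
    """Weights must sum to 1 so composites stay on the 0-100 scale."""
    return abs(sum(ontology["dimensions"].values()) - 1.0) < 1e-9
```

The engine code never changes; swapping the config dict swaps the lens.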

What this means for the business: One system serves any number of verticals. A subscriber tracking the micro-drama category and another tracking edtech startups would get structurally identical reports with completely different ontologies. The analytical engine is the same. The lens it looks through is different.

This ontology layer is the least-tuned part of the current system. The micro-drama SBPI weights have not been calibrated against historical outcomes. Once we invest in ontology design (expert interviews, backtesting against known market shifts), the accuracy jumps significantly. That's a content investment, not an engineering one; the technology is already built.

06

Publication-Ready Output

The report is the product

SHUR IQ doesn't produce raw data dumps. It produces publication-grade editorial content with interactive visualizations. Thumbnail cards with og:image pulls. Ranked tables with delta tracking. Gap analysis with severity badges. Breaking news with source links and "Why It Matters" callouts.

  • 5 report tabs: Overview, Rankings, Gaps, Breaking News, Methodology
  • 17 companies tracked: scored, ranked, and delta-tracked weekly
  • 3 output formats: full editorial, public authority version, social assets

This scales to 100+ verticals with the same production quality. A micro-drama report, a fintech report, an edtech report. Same editorial design system, same analytical rigor, different data. Each one looks like it was hand-produced by a team of analysts and designers.

View the live editorial report →

View the standalone Breaking News page →

07

Cumulative Value

Every week the system runs, the database gets more valuable

Week 10 established baselines for 16 companies. Week 11 added CandyJar, tracked deltas for all 17, and expanded foreign language coverage. Week 12 will add Watchlist companies (ShortTV, Holywater, DramaWave) and deepen the persistent knowledge graph.

The state file tracks per-company: composite score, previous composite, delta, tier, key signal, and signal URL. The InfraNodus persistent graph accumulates cross-week relationships. Gap objects track first-identified dates and open/closed status. All of this compounds.

After 10 weeks, you have a longitudinal database of category dynamics that doesn't exist anywhere else. After 6 months, you have a proprietary industry dataset that competitors would need to rebuild from scratch. After a year, you have an asset.

Google Trends shows you that "microdrama" searches went up 40% in February. It can't tell you that JioHotstar jumped +5.5 SBPI points because of a 100-title IPL commitment, that CandyJar appeared from nowhere at #6, or that Netflix is bleeding structural position despite having the best distribution score in the category.

08

What Google Trends Can't Do

An honest comparison

Google Trends is a useful tool. It shows search interest over time, geographic distribution, and related queries. For quick pulse checks on consumer interest, it works. But competitive intelligence requires more than search volume trends.

Google Trends + Alerts

  • Search volume trends (relative, not absolute)
  • Geographic interest distribution
  • Related queries and topics
  • Email alerts on new mentions
  • English-only signal (no foreign source integration)
  • No scoring, no rankings, no structural analysis
  • No week-over-week delta tracking
  • No gap detection or negative space analysis
  • No editorial output (raw data only)

SHUR IQ

  • 5-dimension structural scoring (SBPI) per company
  • Week-over-week delta tracking with evidence
  • 4-language source coverage (EN, ZH, KO, HI)
  • Structural gap detection via knowledge graph analysis
  • Discovery sweep for new market entrants
  • Publication-ready editorial with interactive viz
  • Customizable ontology per vertical/client
  • Cumulative industry database (compounds weekly)
  • Self-correcting (CandyJar-class miss prevention)

The comparison isn't really fair. Google Trends measures consumer attention. SHUR IQ measures structural competitive position. They answer different questions. But if the question is "what's actually happening in this category and who's winning," one of these tools can answer it and the other can't.

See It Live

The Week 11 report is published. 17 companies scored. 5 structural gaps tracked. 52 sources across 4 languages. Breaking news with thumbnails and editorial analysis.