Drag green dots on the images you most associate with "visual data". Drag red dots on the images you least associate with "visual data".
Meta-type is the signal. Positive images contain self-referential text—numbers, labels, and parameters that describe the image's own elements. The text isn't just displaying data; it's documenting the visual itself.
Absence, not presence, drives rejection. Negative images aren't rejected for what they contain (gradients, organic forms, illustration) but for what they lack: the meta-type layer. Add parameters to the tree sunburst and it might flip.
Medium doesn't matter. Both digital interfaces and print materials scored positively—as long as they displayed meta-type. The Küche packaging specimen sheet (print) scored 76.67% alongside digital dashboards.
Visual data signals trustworthiness. Like nutritional labels on food packaging, the meta-type aesthetic communicates that something has been measured, verified, and subjected to scrutiny—whether or not anyone reads the actual values.
Participants identify "visual data" through a specific visual signal: self-referential text that documents the image's own construction. This includes numerical values, parameter labels, unit specifications, and dimensional annotations that describe elements within the image itself. The highest-scoring images don't just show data; they show their work.
Critically, images in the negative array aren't rejected for containing "anti-data" elements. They're rejected because they lack the meta-type layer entirely. A geometric wireframe, an illustrated wave pattern, and a gradient sphere share nothing visually—except the absence of self-documenting text. This reframes the strategic question from "what to avoid" to "what to add."
Participants aren't evaluating whether data exists behind an image—they're evaluating whether the image performs the aesthetic of measurement. Meta-type signals that something has been quantified, verified, and subjected to rigor. Its presence is the message.
Consider the nutritional information panel on a soda bottle. Almost no one reads it. But if it weren't there, far fewer people would drink the product. The panel's function isn't to communicate specific values—it's to signal that the contents have been measured, regulated, and deemed safe. Visual data works the same way. The meta-type aesthetic tells viewers: "This has been subjected to scrutiny. This is accountable. This is trustworthy." The performance of rigor produces the perception of validity.
Every image in this array contains meta-type: text that refers to elements within the image itself. The planet infographic (95.35%) displays "146.7 kPa" and "2.639 km/s"—measurements of the depicted object. The shadow parameter sheet shows shadow effects alongside their exact blur, spread, and color values. The lightning icon breakdown annotates each layer's fill, opacity, and blend mode.
This isn't just "text on images"—it's text that creates a feedback loop between visual and verbal. The image shows something; the text measures it; the measurement validates the image. This self-documenting quality is the core signal.
The array includes both digital interfaces (dashboards, phone UI, audio player) and print materials (the Küche packaging grid at 76.67%). The medium is irrelevant. What matters is the presence of meta-type—specific values, labeled dimensions, categorical annotations. A print specimen sheet with numbered elements reads as "data" just as clearly as a digital dashboard.
Compositions tend toward modular, zoned layouts—information organized into discrete regions with clear hierarchies. Dark backgrounds with high-contrast data elements dominate. The aesthetic reference points are technical documentation, design specifications, and instrument readouts. These images look like they were made to be verified.
These images share almost nothing visually. The array contains: monochromatic line art, vibrant gradient spheres, geometric wireframes, illustrative wave patterns, atmospheric photography, and 3D renders. What unites them isn't a shared aesthetic—it's the shared absence of meta-type.
None of these images contain self-referential measurement. The tree sunburst (90.24% resistance) could represent data hierarchy, but without labels or values, it could equally be pure decoration. The geometric wireframe could be a data structure visualization, but without annotations, it reads as pattern. The gradient sphere could encode information, but without parameters, it's just color.
It would be a mistake to conclude "avoid organic forms" or "avoid gradients" from this array. The tree, the waves, and the gradients aren't causing rejection—the missing meta-type is. This is actionable: these images could potentially flip to positive by adding a meta-type layer. Annotate the wireframe with vertex counts and dimensions. Label the gradient with color stops and blend modes. Add measurement and the aesthetic transforms.
Interestingly, the OKOK image (81.25% resistance) contains text—but it's descriptive marketing copy, not measurement. "Designed, manufactured, and produced for true emotional reading" makes claims but provides no values. This confirms the distinction: text alone isn't the signal; self-referential, quantified text is.
Contested images reveal where participants disagree about category boundaries. The pattern is consistent: these images contain some meta-type signals buried under competing aesthetic elements.
The AI humanoid/schematic (67.65%) includes technical drawing elements and dimensional lines—meta-type signals—but they're partially obscured by atmospheric treatment and the dominant figurative element. The planet diagram (55.56%) has numbered callouts and orbital notation, but uses a vintage-collage aesthetic that muddies the signal. The data is there; it's just not leading.
The fractal tree (31.71%) has the hierarchical structure of a data visualization but zero labeling—it could flip with annotations. The Bartledan dashboard (42.31%) shows actual readouts, but they're dwarfed by dramatic atmospheric imagery. The weather app (40%) displays real data (locations, weather conditions), but photography dominates the frame.
Contested cases suggest a ratio rule: when meta-type occupies less than ~40% of the visual hierarchy, opinion splits. The meta-type needs to be not just present, but prominent. Secondary annotation on a primarily aesthetic image doesn't reliably register as "visual data."
| Dimension | Positive (Meta-Type Present) | Negative (Meta-Type Absent) |
|---|---|---|
| Text Function | Self-referential: describes elements in the image | Absent, decorative, or purely expressive |
| Information Display | Explicit values: numbers, units, parameters | No measurable values visible |
| Spatial Organization | Modular, zoned, hierarchical | Unified, flowing, holistic |
| Visual-Verbal Relationship | Feedback loop: image ↔ measurement | Image stands alone |
| Interpretive Demand | Low: meaning labeled on the surface | High: meaning must be inferred |
| Implicit Promise | "This has been measured and verified" | "This is for aesthetic experience" |
Participants operate with an implicit framework: "visual data" = visual + documentation of that visual. The image alone isn't data; the image plus its self-referential annotation is data. This is why a chart with axis labels reads as data, while a pattern without labels doesn't—even if the pattern actually encodes information.
More fundamentally, the meta-type aesthetic signals accountability. An image with visible measurements implies someone took those measurements. Specific values imply verification is possible. The aesthetic performs the existence of a rigorous process behind the visual.
This explains why the presence of meta-type matters more than whether anyone actually reads it. The nutritional label analogy holds: the function of visual data aesthetic is to signal trustworthiness, not to communicate specific information. Participants aren't asking "what does this data say?" They're asking "has this been subjected to measurement?" The meta-type layer answers yes.
Sample: 67 completed surveys from 181 survey opens (37% completion rate). This represents a moderate sample size that supports pattern identification with reasonable confidence, though individual image scores should be treated as directional rather than precise.
Threshold: 70% agreement required for Positive/Negative classification. Images below this threshold in both directions are classified as Contested.
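The classification rule above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the study's actual analysis code; the function name and the assumption that scores arrive as positive/negative agreement percentages (0–100) are mine.

```python
def classify(positive_pct: float, negative_pct: float, threshold: float = 70.0) -> str:
    """Apply the report's 70% agreement rule.

    positive_pct: share of respondents associating the image with "visual data"
    negative_pct: share of respondents rejecting that association
    (Hypothetical input format; the report does not specify its raw data shape.)
    """
    if positive_pct >= threshold:
        return "Positive"
    if negative_pct >= threshold:
        return "Negative"
    # Below threshold in both directions: participants disagree.
    return "Contested"

# Scores quoted in the report:
print(classify(95.35, 4.65))   # planet infographic -> Positive
print(classify(9.76, 90.24))   # tree sunburst -> Negative
print(classify(67.65, 32.35))  # AI humanoid/schematic -> Contested
```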
Confidence Level: With n=67, patterns that emerge across multiple images can be treated with reasonable confidence. The meta-type pattern appears robustly across the dataset—every positive image contains it; every negative image lacks it. Individual edge cases near the threshold should be interpreted cautiously.
Suggested Validation: The reframing (absence vs. presence) suggests a testable hypothesis: adding meta-type annotations to rejected images should shift them toward positive classification. This could be validated with a follow-up study using modified versions of negative-array images.
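If the follow-up study is run, the flip hypothesis reduces to comparing two proportions: the positive-classification rate for an original image versus its meta-type-annotated version. A minimal sketch of that comparison using a standard two-proportion z-test, with entirely hypothetical vote counts (the function and numbers are mine, not from the study):

```python
from math import erf, sqrt

def two_proportion_z(success1: int, n1: int, success2: int, n2: int):
    """Two-sided two-proportion z-test; returns (z, p_value).

    Illustrative only: a real follow-up would also need a power
    analysis and correction for testing multiple images.
    """
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: annotated wireframe gets 45/67 positive votes,
# original gets 8/67.
z, p = two_proportion_z(45, 67, 8, 67)
print(f"z = {z:.2f}, p = {p:.4g}")
```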