HOS Image: Waiting as an algorithmic form: The image as a promise, 2025


Marcello Mercado

 

HOS Image: Waiting as an algorithmic form: The image as a promise

 

2025

 

 


1.

At the threshold between generation and expectation, this installation presents an image that does not yet exist. A recurring system message, trapped in its own waiting state, transforms into a generative landscape. Here, the “non-event” becomes the scene: inactivity as a form of agency and error as an aesthetic form. The algorithm does not generate the image itself, but its spirit: an invisible performativity, a readymade suspended in latency.

 

Conceptual Core

In today’s world of AI-generated images, a new figure emerges: the HOS image (Held On Server). It is not an image in itself, but a provisional, suspended version of one. Its existence is defined by waiting: an initiated request, a promise of visuality not yet fulfilled. This image is stored on shared servers allocated to unpaid users and has not yet crossed the threshold of visibility. It inhabits the technical limbo of latency.

In this context, art is no longer limited to representing the visible. Rather, it begins to integrate the architectures of access, computational time, and systemic inequalities that determine whether something becomes an image at all. The HOS image is thus both an object of study and a critical gesture. It compels us to ask: What remains outside visuality due to infrastructural constraints? How does the algorithm function as an economic and aesthetic filter? What does it mean to curate what has not yet appeared?

Curating HOS images means paying attention to how latency becomes political. Instead of exhibiting finished images, one might expose dead times, unfinished processes, and moments when the eye waits but does not see. Through this shift, the exhibition becomes a choreography of suspension—an archaeology of non-rendering.


The prompt has been issued. Image generation begins.

 

 

Advanced imaging


Experimental Process


 

  • Prompt Input: A text prompt is entered into an AI image generation system.

  • System Response: The system displays a message indicating that the image is being processed.

  • Screenshot Capture: A screenshot of this message is taken, representing the “HOS” image.

  • Printing and Re-photographing: The screenshot is printed and then re-photographed using a simple pinhole camera made from a shoebox. This requires an exposure time of 60 to 90 minutes.

  • Final Output: The resulting photograph is printed, and the original prompt is entered into the AI system again to generate the complete image. This allows for a comparison between the expected and the realized image.


01. Selected image printed on paper to be photographed with a camera obscura.

 

 

02. Selected image printed on paper to be photographed with a camera obscura.

 

 

03. Selected image printed on paper to be photographed with a camera obscura.

 

 

04. Camera Obscura (pinhole camera).

 

 

05. Image 01 was created in the darkroom.

 

 

06. Image 02 was created in the darkroom.

 

07. Image 03 was created in the darkroom.


08. This algorithm simulates the differential latency experienced by users based on their subscription status, highlighting the infrastructural and economic factors that influence access to AI-generated images.


09. Installation diagram


2.

Again…

At the threshold between generation and expectation, this installation shows an image that does not yet exist. A recurring system message, blocked in its own waiting state, transforms into a generative landscape. Here, the “non-event” becomes the scene: inactivity as a form of action, error as a form of aesthetics. The algorithm does not generate the image itself, but its spirit: an invisible performativity, a readymade suspended in latency.

This phenomenon can be understood as a manifestation of “differential algorithmic latency” — a system-structured mechanism that manages computational load through access hierarchies. Waiting is thus not an exception or a malfunction but a core function of the system, a strategy of commercial optimization as well as experience differentiation.

In this context, the AI-generated image is not simply a visual result but a computational instance conditioned by access logics, prioritization strategies, and usage policies. Each image triggers a consumption of computational resources and becomes an entity with computational costs, algorithmic architecture, and commercial policy embedded within it.

This work encourages reflection on how latency, waiting, and inactivity can become aesthetic and critical elements in the age of generative AI.

A. General Framework

The ChatGPT message “Image is being processed. Many people are currently creating images, so this process may take some time. We will notify you when your image is ready.” can be analyzed as an interface unit that combines technical architecture, business logic, and temporal modulation. Waiting is therefore not an exception or an error but a structural function of the system.

B. The Involved System Elements

  • Algorithmic Queues:
    The system implements a scheduled queue to manage limited computing resources. Premium users receive priority, while non-paying users are moved to a “low-priority zone” managed by a scheduler.

  • Load Balancing:
    The message signals high system load. Technically, this means compute nodes are busy and tasks are being redistributed to other nodes. However, this redistribution is not neutral but subject to access rights policies.

  • Conditioned Temporal Experience:
    The user experience is modulated according to their position in the hierarchy. Latency thus becomes not merely a technical issue but an operational mechanism of stratification.
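The queue-plus-scheduler structure described above can be sketched in Python. This is a minimal sketch, assuming a two-tier hierarchy; the tier names and priority values are illustrative assumptions, not a description of any real platform's internals:

```python
import heapq
import itertools

# Illustrative priority values; lower number = served first.
# Real platforms do not publish these weights.
PRIORITY = {"premium": 0, "free": 1}

counter = itertools.count()  # tie-breaker preserves submission order

def submit(queue, user_type, prompt):
    """Push a generation request into the scheduled queue."""
    heapq.heappush(queue, (PRIORITY[user_type], next(counter), user_type, prompt))

def drain(queue):
    """Serve requests in priority order: premium first, then free."""
    order = []
    while queue:
        _, _, user_type, prompt = heapq.heappop(queue)
        order.append((user_type, prompt))
    return order

queue = []
submit(queue, "free", "a landscape in fog")
submit(queue, "premium", "a portrait at dusk")
submit(queue, "free", "an empty server room")

# Premium requests leave the queue before free ones,
# regardless of submission order.
print(drain(queue))
```

The scheduler never rejects the free request; it only defers it. The "low-priority zone" is simply the tail of the heap.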

 

C. Analytical Categories

  • Functional Latency:
    The difference between request time and delivery time.

  • Strategic Latency:
    Deliberately introduced as part of the freemium model.

  • Perceptual Latency:
    How waiting is communicated — in this case, through informal and passive language — and how user attention is managed.

 

D. Computational and Structural Implications

Waiting becomes an indirect selection operator that filters user behavior (wait, pay, or give up?). The message itself is part of an algorithmic containment system designed to regulate user anxiety about waiting through friendly phrasing, without revealing the discriminatory logic beneath.

From a platform architecture perspective, this represents a case of differentiated scaling, where access to compute-intensive resources is governed by the business model.

E. Conceptual Proposal

This phenomenon can be categorized as “Differential Algorithmic Latency” (DAL):
a system-structured waiting mechanism for managing computational load through access hierarchies. This involves not only technical capacities but also commercial optimization strategies and differentiated user experience design.
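DAL can be reduced to a small function from access tier to a simulated wait. The latency bands below are placeholders invented for this sketch, not measured values:

```python
import random

# Placeholder latency bands in seconds; invented for the sketch,
# not measurements of any real platform.
LATENCY_BANDS = {"premium": (1.0, 3.0), "free": (15.0, 90.0)}

def differential_latency(user_type: str) -> float:
    """Draw a simulated wait time from the user's tier band."""
    low, high = LATENCY_BANDS[user_type]
    return random.uniform(low, high)

# Non-paying users always wait inside a strictly slower band.
print(differential_latency("premium"), differential_latency("free"))
```

Because the two bands never overlap, waiting here is not noise around a shared mean but a hard partition of temporal experience.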

 

10. This algorithm simulates the differential latency experienced by users based on their subscription status and highlights the infrastructural and economic factors that influence access to AI-generated images.

 

 

11.


3. Creation of a Conceptual Algorithm in Python

The next phase consists of designing a conceptual algorithm in Python that simulates this queuing structure. The algorithm does not generate images but simulates:

  • A task queue based on priority,

  • Differential latency, and

  • The system’s structured response depending on the user’s status.

This algorithm serves as a critical analytical model for image generation platforms based on freemium structures.

3.a Conceptual Algorithm: Simulation of Differential Latencies in AI Image Generation

Explanation:


This algorithm simulates a differentiated queue structure based on user type (free or paying). No actual images are generated; instead, the algorithm simulates:

  • Queuing of generation requests,

  • Assignment of waiting times based on user type,

  • Output of the message “Image is being processed…”, followed by a notification when the image is ready.
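One possible sketch of that simulation follows. It is a conceptual model, not production code: the wait times are assumptions scaled down so the demo runs quickly, and the wording of the messages paraphrases the system message quoted in this text:

```python
import time
from dataclasses import dataclass, field
from queue import PriorityQueue

# Illustrative wait times, scaled down to fractions of a second.
WAIT = {"premium": 0.01, "free": 0.05}

@dataclass(order=True)
class Request:
    priority: int                              # 0 = premium, 1 = free
    ticket: int                                # preserves submission order
    user_type: str = field(compare=False)
    prompt: str = field(compare=False)

def simulate(requests):
    """Simulate the differentiated queue: premium served first, free delayed."""
    queue = PriorityQueue()
    for ticket, (user_type, prompt) in enumerate(requests):
        priority = 0 if user_type == "premium" else 1
        queue.put(Request(priority, ticket, user_type, prompt))

    log = []
    while not queue.empty():
        req = queue.get()
        log.append(f"[{req.user_type}] Image is being processed…")
        time.sleep(WAIT[req.user_type])  # differential latency
        log.append(f"[{req.user_type}] Your image is ready: {req.prompt}")
    return log

log = simulate([("free", "HOS image"), ("premium", "HOS image")])
print("\n".join(log))
```

Even though the free request arrives first, the premium request is processed and announced first: the simulation makes the waiting itself, not the image, the output of the system.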

3.b Conceptual Algorithm Objective

The aim is to simulate an image-generation system with latencies, errors, priorities, and protocols, mirroring the message:

«Image is being processed. Many people are currently creating images, so this process may take some time. We will notify you when your image is ready.»

A. Curating HOS Images

In today’s landscape of AI-generated images, a new figure emerges: the HOS image (Held On Server). It is not yet an image, but its suspended anticipation. Its existence is defined by delay: an initiated request, an unfulfilled promise of visuality. This image, stored on shared servers assigned to non-paying users, has not yet crossed the threshold of visibility. It lingers in the technical limbo of latency.

In this context, art is no longer limited to the representation of the visible; it begins to integrate architectures of access, computational time, and systemic inequalities that determine what can and cannot become an image. The HOS image is both an object of study and a critical gesture. It compels us to ask: What remains invisible due to infrastructural constraints? How does the algorithm function as an economic and aesthetic filter? What does it mean to curate what has not yet appeared?

Curating HOS images means attending to how latency becomes political. Rather than exhibiting finished images, one might exhibit dead times, unfinished processes, and moments when the eye waits but does not see. This shift transforms the exhibition into a choreography of suspension — an archaeology of non-representation.

A HOS image (Held On Server) is an image generated by an AI model that has not yet been fully processed or delivered to the requesting user. It remains suspended in a processing queue on shared servers, typically assigned to free or non-paying users. Technically, this state reflects a mechanism of resource management and computational prioritization. Conceptually, the HOS image embodies latent visuality — a form not yet rendered, resting within an invisible architecture, caught between request and appearance. It can also be interpreted as a symptom of an unequal algorithmic economy, in which access to visibility is mediated by payment tiers, speed, and computational privilege. The image exists as a promise, as waiting, as structured delay.

General Framework

The sentence:

«Image is being processed. Many people are currently creating images, so this process may take some time. We will notify you when your image is ready.»

can be analyzed as an interface unit that combines technical architecture, business logic, and user time modulation. Waiting is not a failure or an exception: it is a structural function of the system.

Waiting as Algorithmic Form

Rather than understanding waiting merely as a technical delay, it can be seen as an algorithmic form predetermined by access conditions, algorithmic priorities, server architecture, and business models. In the free version of ChatGPT, waiting becomes a kind of computational class threshold: those who do not pay must wait. This relates to the following ideas:

  • Critical Infrastructure Theory (Lisa Parks, Tung-Hui Hu):
    Investigating the invisible layers of digital processing and how they mediate user experience.

  • Latency Theory:
    Latency not only as technical delay but as a political construction of access time.

  • Media Theory (Wendy Hui Kyong Chun):
    Processing times as political relationships, not neutral conditions.

 

 

Critical / Metatheoretical Definitions

  • 46. Image as Visible Ideology:
    Encompasses technical decisions that reflect values, exclusions, and biases.

  • 47. Image as Object of Critical Speculation:
    Allows questioning the boundaries between authorship, automation, and culture.

  • 48. Image as Form of Accelerated Abstraction:
    Produced without consciousness, body, or affect but according to visual logic.

  • 49. Image as Reification of Contexts:
    Makes visible regularities without deeper semantic content.

  • 50. Image as Distorted Mirror of Desire:
    Does not return what is desired but what the system infers as desirable.

 


The 50 definitions of the image in the context of generative AI demonstrate that we can no longer understand the image as a passive unit of perception or as a stable representation of reality. Instead, it emerges as a technical, operational, and speculative entity. Every generated image manifests not only a process of statistical inference but is also marked by economies of waiting, hidden infrastructures, algorithmic decisions, and a genealogy of visual techniques.

The act of “waiting for an image” is itself a critical experience, as it makes us aware of time, privilege, technical architecture, and the transformation of the image into a conditional flow. From this perspective, the image no longer represents; it distributes, prioritizes, and computes. AI does not produce images of the world but of the system that produces them.

 

 

Practical Applications of the Definitions

Critical Curation of Generative Images
These definitions enable the development of curatorial criteria for exhibitions working with AI-generated images, focusing not on the “content” of the image but on its technical, political, temporal, or epistemological condition.

Interface Design
Developers can incorporate these categories to create interfaces that present waiting times, latencies, or errors as part of the aesthetic experience, rather than concealing them.

Dataset Analysis
Use these categories to classify training images and understand how aesthetic or ideological categories are distributed across datasets.

Critical Digital Media Pedagogy
The definitions can serve as a foundation for courses in art, design, philosophy of technology, or visual studies seeking to problematize the image beyond its form.

Institutional Critique of Generative Models
These definitions can inform institutional policies on algorithmic transparency and visual ethics, suggesting criteria for evaluating generative models in terms of bias, accessibility, or economic structure.

Speculative AI Architectures
System architects or artists can use these definitions to simulate alternative AI models where time, waiting, or error are intentional and significant elements of visual production.

For Example: 5. Dataset Analysis — Rethinking the Image as a Distribution of Latent Biases and Structures

Core Concept

By redefining the image as a technical-algorithmic entity within a generative AI system, we acknowledge the need to address not only the visible result but also the latent conditions that make it possible. The training dataset is no longer just a simple collection of images; it becomes a powerful visual structure where biases, repetitions, aesthetic hierarchies, omissions, and privileges are manifested.

Analyzing a dataset therefore involves not only inspecting its images but also mapping the patterns of visibility and exclusion it generates.

Practical Application

Using the 50 preceding definitions (especially the ontological, epistemological, and political ones), a taxonomic analysis model of the dataset can be developed:

  • Classify images by lighting patterns, facial proportions, or predominant artistic style.

  • Measure the geographic or ethnic distribution of faces.

  • Identify formal redundancies that favor “neutral” or dominant styles.

  • Compare the number of images with neutral backgrounds versus those with environmental context.

  • Evaluate implicit taxonomies: What types of bodies, spaces, or gestures are overrepresented or absent?
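The taxonomic measurements listed above could start from something as simple as counting metadata tags. The records and field names below are hypothetical, standing in for the annotation files a real dataset audit would read:

```python
from collections import Counter

# Hypothetical per-image metadata records; a real audit would
# load these from the dataset's annotation files.
dataset = [
    {"region": "Europe", "background": "neutral"},
    {"region": "Europe", "background": "environmental"},
    {"region": "West Africa", "background": "neutral"},
    {"region": "Europe", "background": "neutral"},
]

def distribution(records, key):
    """Share of each value for one metadata field, as fractions."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

print(distribution(dataset, "region"))
# {'Europe': 0.75, 'West Africa': 0.25}
```

Such distributions are the raw material for the visual bias maps proposed below: overrepresentation becomes a number before it becomes a map.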

This analysis can inform critical visual tools or interfaces, such as:

  • Visual Bias Maps: Showing the density of certain image types.

  • Latency Visualizations: Indicating which types of images are generated faster or with greater confidence.

  • Dialogical Interfaces: Confronting users with the genealogy of a generated image, showing its training “ancestors.”

6. Examples of Works, Installations, and Interfaces

A. The Training Room (interactive installation)
An immersive space simulating the interior of a dataset. Images are projected as a floating cloud, and each visitor can “select” an image to reveal 1,000 similar images. Each selection displays metrics such as “frequency,” “stylistic repetition,” “predominant race,” “geographic origin,” and “level of detail.”
Inspired by AI training environments but visually exposing invisible structures.

B. Latent Discriminator (critical web interface)
A website where users generate any AI image, after which the interface returns a detailed analysis of which dataset subsets most influenced the image, including percentages, styles, semantic classes, detected biases, and omissions.
Both educational and critical, it makes clear that a generated image does not emerge from nowhere but from a deeply structured and imbalanced field.

C. The Delay Mirror (algorithmic performance and installation)
A camera captures the visitor’s face, and AI attempts to generate their portrait. The system intentionally introduces variable delays based on dataset statistics: the less frequent the visitor’s facial style is in the dataset, the longer the image takes to generate.
A real-time graph shows the system’s “familiarity” with the visitor’s face — a critique of unequal representation and visibility privilege: not all bodies are read with the same fluency.

D. Genealogy of an Image (advanced visualization)
When generating an image with AI, the system not only presents the final image but also displays a genealogy of its training data: key images, dominant styles, and visible/invisible authors.
Each image is accompanied by a “generative transparency” score and a “complexity of inheritance” score, allowing visualization of the debt each generated image owes to its sources.

E. Latency Atlas (critical cartography of visual generation)
A large screen displays a world map divided into cultural, ethnographic, and aesthetic zones. For each region, the average time an AI system takes to generate images associated with that culture or style is measured and displayed.
A bar graph shows which zones are “generated” the fastest.
A direct exposure of algorithmic inequality: speed itself reveals bias.

 

7. Research References

  • Yuk Hui, The Question Concerning Technology in China: exploring the intersections between technology and cultural specificity.

  • Benjamin Bratton, The Stack: describing how platforms reorganize sovereignty, governance, and time.

  • Mark Hansen, Feed-Forward: On the Future of 21st-Century Media: examining how contemporary media preprocess the future.

 


 

13. Conceptual algorithm: a simulation of an image-generation system with latencies, errors, priorities, and protocols, mirroring the message “Image is being processed. Many people are currently creating images, so this process may take some time. We will notify you when your image is ready.”