
HOS Image: Waiting as an algorithmic form: The image as a promise

 


 

 

Marcello Mercado

HOS Image: Waiting as an algorithmic form: The image as a promise

 

 

 

 

On the threshold between generation and waiting, this installation presents an image that has not yet occurred. A system message, repeated and trapped in its own state of waiting, transforms into a generative landscape. Here the «non-event» becomes the scene: inaction as a form of agency, error as a form of aesthetics. The algorithm does not generate the image, but its ghost: an invisible performativity, a readymade suspended in latency.


 

This phenomenon can be understood as a manifestation of «differential algorithmic latency,» a mechanism structured by the system to manage computational load through access hierarchies. Waiting is not an error or an exception: it is a function of the system, a strategy of commercial optimization and differentiation of experience.


 

In this context, the image generated by artificial intelligence is no longer simply a visual result, but a computational instance conditioned by access logics, prioritization strategies, and usage policies. Each image activates a consumption of computational resources and becomes an entity with computational costs, with implications for algorithmic architecture and commercial policy.

 

This work invites reflection on how latency, waiting, and inaction can become aesthetic and critical elements in the age of generative artificial intelligence.


 

 

 

1. HOS Image: Waiting as an algorithmic form: The image as a promise:

 

 

A HOS (Held On Server) image is an image generated by an artificial intelligence model that has not yet been fully processed or delivered to the requesting user. It remains suspended in a queue within shared servers, typically assigned to non-paying or free users. Technically, this situation reflects a mechanism of resource management and computational prioritization.

 

Conceptually, the HOS image embodies a latent visuality, a not-yet-deployed form inhabiting an invisible architecture, in transit between the request and its visual appearance. It can also be understood as a symptom of an unequal algorithmic economy in which access to the visible is mediated by hierarchies of payment, speed, and computational privilege. The image exists as a promise, as a wait, as a structured delay.

 

 

A. General Framework:

The ChatGPT phrase «Image processing. Many people are creating images right now, so this may take a while. We’ll notify you when your image is ready» can be analyzed as an interface unit that condenses a technical architecture, a business logic, and a time-of-use modulation. Waiting is not an error or an exception: it is a function of the system.

 

B. System elements involved:

1. Algorithmic Queuing: The system implements a scheduled wait time to manage limited computing resources. Premium users are prioritized, while non-paying users are moved to a «low-priority zone» managed by a scheduler.

2. Load balancing: The message indicates high load. Technically, it implies saturated compute nodes and the redistribution of tasks to others. But this redistribution is not neutral: it obeys access rights policies.

3. Conditioned Temporal Experience: The user experience is modulated according to each user’s place in the hierarchy. Latency is no longer merely technical: it becomes an operational stratification mechanism.

 

C. Categories of Analysis:

1. Functional Latency: The difference between request time and delivery time.

2. Strategic Latency: deliberately introduced as part of the freemium model.

3. Perceptual Latency: how waiting is communicated (in this case using informal and passive language) and how attention is managed.

 

D. Computational and structural implications:

Waiting becomes an indirect selection operator that filters user behavior (wait? pay? abandon?).

The message is part of an algorithmic containment system designed to regulate the anxiety of waiting with a friendly phrase, while keeping its discriminatory logic intact.

From a platform architecture perspective, this is a case of differentiated scaling, where access to compute-intensive resources is regulated by the business model.

 

E. Conceptual Proposal:

This phenomenon can be categorized under the concept of Differential Algorithmic Latency (DAL):

It is a queuing mechanism structured by the system to manage the computational load through access hierarchies. It responds not only to technical capabilities, but also to business optimization strategies and experience differentiation.

 

 

 

2. The Image as a Differential Product in Generative Systems:  

     Latency, Business Optimization and Access Hierarchy

 

 

A. INTRODUCTION

In the context of generative artificial intelligence, the image is no longer simply a visual result, but a computational instance processed in a distributed architecture conditioned by access logic, prioritization strategies, and usage policies. This essay begins with a technical and structural analysis of the message:

«Image processing. Many people are creating images right now, so this process may take a while. We will notify you when your image is ready.»

in order to break down its implications along two dimensions:

(1) the redefinition of «image» as a generative entity within an economy of latency and optimization, and

(2) the algorithmic design of a conceptual simulation in Python that represents this differential waiting as a structural function.

 

 

B. What is an image in generative AI?

1.1. Image as computational output (output artifact).

In generative systems, such as DALL-E or Stable Diffusion, the image is not a pre-existing object, but an artifact resulting from a series of statistical computations, latent space sampling, and pixel generation from trained models. Technically, it can be defined as:

Image (AI-G): a data structure consisting of a multi-channel array (RGB matrix) derived from a probabilistic inference on a trained model, conditioned by a textual prompt and generation parameters.
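As a loose illustration of this definition (and not the internal representation of any particular model), such an output can be sketched as a small data structure: a multi-channel pixel array together with the prompt and the generation parameters that conditioned it. All names and fields below are assumptions made for the example.

# Illustrative sketch only: a hypothetical container for an AI-generated image,
# following the definition above. Names and fields are assumptions, not any model's API.
from dataclasses import dataclass
import numpy as np

@dataclass
class GeneratedImage:
    prompt: str         # textual prompt that conditioned the inference
    params: dict        # generation parameters (seed, steps, guidance, ...)
    pixels: np.ndarray  # multi-channel RGB array of shape (height, width, 3)

def fake_inference(prompt: str, height: int = 64, width: int = 64, seed: int = 0) -> GeneratedImage:
    # Stand-in for probabilistic inference: a seeded RNG produces the pixel matrix.
    rng = np.random.default_rng(seed)
    pixels = rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)
    return GeneratedImage(prompt=prompt, params={"seed": seed}, pixels=pixels)

img = fake_inference("a landscape that has not yet occurred", seed=42)
print(img.pixels.shape)  # (64, 64, 3): a multi-channel matrix, not a picture of the world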

1.2. Image as differentiated consumption vector.

The same image, generated with different access levels (free or premium user), presents different latency, resolution or quality conditions. Therefore, the generated image is not universal; its shape depends on the business model.

Differential image: A computational product whose availability, resolution and generation speed are modulated by the privilege level assigned to the user.

1.3. Image as a node within an economic infrastructure.

Each generated image triggers the consumption of GPU capacity, memory, and computing time. On a freemium platform, this translates into a real monetary cost for the provider. The platform manages these costs based on:

Rate limiting models.

Priority queuing.

User segmentation logic (free, professional, API).

The image is therefore not a final result, but an entity with a computational cost, with implications for algorithmic architecture and commercial policy.
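To make the cost-management mechanisms listed above concrete, the following minimal sketch shows how tier-based rate limiting and queue priority could be encoded. The tiers, limits, and priorities are invented for illustration and do not describe any real platform’s policy.

# Hypothetical illustration of tier-based rate limiting and queue priority.
# The tiers, limits and priorities are invented for this sketch.
import time

TIER_POLICY = {
    "free":         {"requests_per_hour": 5,    "queue_priority": 2},
    "professional": {"requests_per_hour": 100,  "queue_priority": 1},
    "api":          {"requests_per_hour": 1000, "queue_priority": 0},  # 0 = highest priority
}

class RateLimiter:
    """Fixed-window limiter: counts each user's requests inside a one-hour window."""
    def __init__(self):
        self.windows = {}  # user_id -> (window_start, request_count)

    def allow(self, user_id: str, tier: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        limit = TIER_POLICY[tier]["requests_per_hour"]
        start, count = self.windows.get(user_id, (now, 0))
        if now - start >= 3600:      # the previous window expired: start a new one
            start, count = now, 0
        if count >= limit:           # budget exhausted: the request is held back
            self.windows[user_id] = (start, count)
            return False
        self.windows[user_id] = (start, count + 1)
        return True

limiter = RateLimiter()
print(limiter.allow("user-1", "free"))  # True until the free budget of 5 requests is spent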

 

 

C. Technical analysis of the «Processing image…» message.

The message contains four layers of structural information:

1. Status («Processing image…»): informs that the task was received and is in progress.

2. Load («There are many people creating images»): indicates system saturation and load distribution.

3. Warning («This may take a while»): introduces latency as an expectation.

4. Delayed response («We’ll let you know…»): shifts attention to the future and manages user anxiety.

 

This set of elements constitutes an algorithmic cognitive containment strategy designed to maintain user engagement without revealing the exact logic of the queuing system or access hierarchies.


 

 

3. Building a Conceptual Algorithm in Python

 

The next phase is to design a Python algorithm that simulates this queuing structure. The algorithm does not generate images; it simulates:

Task queuing based on priority.

Differential latency.

The structured response of the system based on user conditions.

This algorithm will serve as a critical analysis model for image generation platforms based on freemium models.

 

 

3.a Conceptual Algorithm: Differential Latency Simulation in AI Image Generation


 

Explanation:

This algorithm simulates a differentiated queue structure based on user type (free or paid). It does not generate real images, but rather:

1. Generates queues of requests.

2. Assigns wait times based on user type.

3. Prints the message «Processing image…» followed by a notification when the image is ready.
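The script itself is not reproduced on this page; the following minimal sketch reconstructs the behaviour described above under assumed parameters (the wait times, the number of requests, and the user mix are illustrative only).

# Conceptual sketch of Differential Algorithmic Latency (DAL) in a request queue.
# Wait times, request count and user mix are assumptions chosen for illustration.
import heapq
import random
import time

WAIT_SECONDS = {"premium": 0.5, "free": 3.0}  # assumed differential latencies

def simulate(num_requests: int = 6, seed: int = 1):
    random.seed(seed)
    queue = []  # min-heap ordered by (priority, arrival order); premium is served first
    for i in range(num_requests):
        user_type = random.choice(["free", "premium"])
        priority = 0 if user_type == "premium" else 1
        heapq.heappush(queue, (priority, i, user_type))

    while queue:
        _, i, user_type = heapq.heappop(queue)
        print(f"[request {i} | {user_type}] Processing image. "
              "Many people are creating images right now, so this may take a while.")
        time.sleep(WAIT_SECONDS[user_type])  # the scheduled, differential wait
        print(f"[request {i} | {user_type}] Your image is ready.")

if __name__ == "__main__":
    simulate()

Running it prints the waiting message for each request, with premium requests served first and free requests accumulating the longer, scheduled delay.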

 

 

Conceptual algorithm: Its goal is to represent a simulation of an image-generation system with latencies, errors, priorities, and logs, similar to the message «Processing image. Many people are creating images right now, so this process may take a while. We will notify you when your image is ready.»
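Again as a sketch rather than the exhibited script, the version below extends the same queue with simulated failures and a simple log, covering the four aspects named above: latencies, errors, priorities, and logs. All probabilities, durations, and messages are assumptions.

# Extended sketch: priority queue + differential latency + random errors + a simple log.
# All rates, durations and messages are illustrative assumptions.
import heapq
import random
import time

CONFIG = {
    "premium": {"wait": 0.5, "error_rate": 0.02, "priority": 0},
    "free":    {"wait": 3.0, "error_rate": 0.10, "priority": 1},
}

def run(num_requests: int = 8, seed: int = 7):
    random.seed(seed)
    queue, log = [], []
    for i in range(num_requests):
        user_type = random.choice(["free", "premium"])
        heapq.heappush(queue, (CONFIG[user_type]["priority"], i, user_type))

    while queue:
        _, i, user_type = heapq.heappop(queue)
        cfg = CONFIG[user_type]
        log.append(f"request={i} user={user_type} state=HELD_ON_SERVER")
        print("Processing image. Many people are creating images right now, "
              "so this process may take a while. We will notify you when your image is ready.")
        time.sleep(cfg["wait"])                   # differential latency
        if random.random() < cfg["error_rate"]:   # simulated failure
            log.append(f"request={i} state=ERROR")
            print(f"[request {i}] Image generation failed.")
        else:
            log.append(f"request={i} state=DELIVERED")
            print(f"[request {i}] Your image is ready.")
    return log

for entry in run():
    print(entry)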

 

 

 

 

 

 

3.b Curating HOS Images

 

In the contemporary universe of AI-generated images, a new figure is emerging: the HOS (Held On Server) image. It is not an image per se, but rather its suspended prefiguration. Its existence is marked by waiting: an initiated request, a promise of visuality not yet realized. This image, held on shared servers assigned to non-paying users, has not reached the threshold of the visible. It inhabits the technical limbo of the latent.

 

In this context, art is no longer limited to representing what appears, but begins to incorporate the structures of access, computation times, and systemic inequalities that shape what can and cannot become an image. The HOS image is both an object of study and a critical gesture. It forces us to ask: what is left out of visuality for infrastructural reasons? How does the algorithm work as an economic and aesthetic filter? What does it mean to curate what has not yet appeared?

 

Curating the HOS images involves attending to the ways in which latency becomes political. Instead of exhibiting finished images, one can think of exposing dead times, unfinished processes, moments when the eye waits but does not see. This displacement transforms the exhibition into a choreography of the suspended, an archaeology of non-rendering.

 

 

 

 

4. Waiting as an Algorithmic Form

 

Rather than understanding waiting solely as a technical delay, it can be thought of as a computational form prefigured by access conditions, algorithmic priorities, server architecture, and business models. In the non-paying version of ChatGPT, waiting becomes a kind of computational class threshold: if you don’t pay, you wait. This is related to ideas from:

A. Critical Infrastructure Theory (Lisa Parks, Tung-Hui Hu): which examines the invisible layers of digital processing and how they mediate user experience.

B. Latency Theory: latency not only as a technical delay, but as a political construction of access time.

C. Media Theory (Wendy Hui Kyong Chun): processing times are not neutral, but political forms of relation.

 

 

5. Collective power of processing

 

The message «many people are creating images» introduces the idea of an invisible crowd, a phantom community, producing simultaneously. Here connections are made with:

A. Structured simultaneity: a form of contactless shared temporal experience that could be compared to networks like Uber or Amazon Mechanical Turk.

B. Algorithmic queues: as spaces of invisible negotiation of desire, where each user is an instance waiting its turn in an opaque architecture.

 

What is an «image» in the context of generative AI?

In the context of generative AI, an «image» is no longer exclusively a visual representation produced by optical or manual means. Instead, it is transformed into a computational instance generated from statistically trained models that interpret latent vectors, textual prompts, and loss functions. The image no longer represents an objective reality or a symbolic subjectivity, but becomes an output surface optimized by algorithmic architectures, computational efficiency criteria, and training patterns.

An AI-generated image is not a stable object, but a deferred process, subject to logics of waiting, commercial priorities, staggered access, and asynchronous processing. The message «Processing image. Many people are creating images right now…» reveals that the image is also an experience of latency, competition for resources, and algorithmic regulation.

 

 

 

Classification of definitions:

 

01. Technical

02. Epistemological

03. Ontological

04. Temporal

05. Phenomenological / Perceptual

06. Political / Economic

07. Mistakes / Failures / Mutations

08. Archival / Serial

09. Archaeological / Historical-Evolutionary

 

 

 

1-5. Technical

01. Image as output vector: set of numerical values in a multidimensional latent space transformed into pixels.

02. Image as functional convergence: Result of a minimally satisfied objective function during training.

03. Image as loss residual: surface that minimizes the distance between the prediction and the data set.

04. Image as weight transfer: updated state of millions of parameters after backpropagation.

05. Image as digital transduction: interpretation of a textual prompt using convolution and attention layers.

 

6-10. Epistemological

06. Image as an act of automated inference: what the machine deduces from a description.

07. Image as the result of a probabilistic epistemology: it represents the most likely, not the true.

08. Image as a cognitive shift: it suggests knowledge without direct human origin.

09. Image as predictive artifact: a proposition of what should or could be.

10. Image as crystallization of learned biases: visual materialization of a database.

 

 

11-15. Ontological

11. Image as non-object: it does not exist without a network, a prompt, and a generating instance.

12. Image as computational event: an action that occurs and disappears if not stored.

13. Image as interpolation surface: between n latent points without a fixed center.

14. Image as model interface: visible only as a translation of internal processes.

15. Image as threshold: boundary between code and representation.

 

 

16-20. Temporal

16. Image as waiting: what appears after an algorithmically controlled delay.

17. Image as asynchronous process: it does not respond to human time, but to the flow of demand.

18. Image as perceptual latency: a state of delayed immediacy.

19. Image as an ephemeral state in the execution queue.

20. Image as a state of postponement: always potential, never immediate.

 

 

21-25. Phenomenological/Perceptual

21. Image as an experience of delayed gratification.

22. Image as a projection of human expectation onto an opaque network.

23. Image as algorithmic representation with the appearance of creation.

24. Image as perceptual deception: it appears human but is machine-like.

25. Image as a surface for the attribution of meaning.

 

 

26-30. Politics / Economy

26. Image as a unit of value generated under payment priorities.

27. Image as a product conditioned by levels of access.

28. Image as a computational privilege.

29. Image as the result of incentive architecture.

30. Image as an algorithmic filter that decides who sees what and when.

 

 

31-35. Mistakes / Failures / Mutations

31. Image as glitch: a revealing error in the system.

32. Image as unwanted mutation of a prompt.

33. Image as productive failure: it doesn’t fulfill expectations, but generates interpretations.

34. Image as the residue of incompatible codes.

35. Image as an asymmetry between human desire and network output.

 

36-40. Archive / Series

36. Image as one instance among millions: a node in an infinite series.

37. Image as copy without original: each generation is first and last.

38. Image as a visual record of an interaction.

39. Image as digital trace, not preserved.

40. Image as a speculative archive: its value lies in possibility, not stability.

 

41-45. Archaeological / Historical-Evolutionary

41. Image as a continuation of the automated pixel of video games.

42. Image as heir to the algorithmic art of the 1960s.

43. Image as a mutation of GANs and their evolution into Transformers.

44. Image as a consequence of the computational dream of the perfect image.

45. Image as the current stage of a long history of visual automatisms.

 

 

1. Technical Definitions

01. Image as a data vector: An image is a matrix of numerical values representing visual information that can be processed by a generative model.

02. Image as computational output: The result of an inference performed by a neural network based on textual or latent input.

03. Image as optimized file: An image is a structure compressed and transformed according to efficiency parameters (weight, format, resolution).

04. Image as rendering instance: Represents a frame generated by layers of graphical processes controlled by stochastic parameters.

05. Image as minimized loss function: The visual manifestation that emerges when a model manages to minimize the error function with respect to the given challenge.

 

 

2. Epistemological definitions

06. Image as a visual hypothesis: A probabilistic conjecture that the model proposes as a valid representation of the input text or context.

07. Image as statistical knowledge: Represents a point of convergence among thousands of examples seen during training, with no direct reference.

08. Image as synthesis of correlations: It is the result of the superposition of co-occurring patterns in previous data sets.

09. Image as operational interpretation: It is a reading that the system makes of a human instruction in terms of vectors and weights.

10. Image as the result of implicit reasoning: It does not illustrate a truth, but rather a probable calculation generated by hidden relationships in the model.

 

 

3. Ontological definitions

11. Image without a referent: The generated image does not represent an existing object, but rather a formal possibility.

12. Image as operational fiction: Its existence depends on the execution of an algorithmic process and not on an empirical world.

13. Image as technical object: It has its own existence as a processed, recorded and stored entity.

14. Image as synthetic appearance: It does not arise from a physical phenomenon, but from a network of numerical transformations.

15. Image as an unstable double: It is a projection that does not refer to a thing, but rather to a set of statistical conditions.

 

 

4. Temporal definitions

16. Image as latency: An entity that does not yet exist, but whose process has begun, awaiting computation.

17. Image as process duration: Its existence is measured by the time it takes to be computed and displayed.

18. Image as a product of waiting: It is linked to a social time shared by thousands of simultaneous users.

19. Image as deferred event: Its appearance depends on a priority queue managed by servers.

20. Image as operational suspension: It is characterized by a threshold between input and visual response.

 

5. Functional definitions

21. Image as Visual Response: It is the functional translation of a textual instruction or prompt.

22. Image as a unit of satisfaction: It is measured on the basis of its usefulness or congruence with the user’s desire.

23. Image as a validation interface: It allows for verification of the functioning of the model or the clarity of the prompt.

24. Image as a training test: It is used to evaluate the effectiveness or biases of the system.

25. Image as Reproducible Output: It can be regenerated, adapted or transformed under new conditions.

 

 

6. Aesthetic Definitions

26. Image as an emerging style: Acquires new visual characteristics by combining diverse training data.

27. Image as automatic pastiche: Recombines formal features from thousands of styles and artists without awareness of authorship.

28. Image as Perceptual Coherence: The image is evaluated based on whether it appears «plausible» or «aesthetically complete».

29. Image as Statistical Visual Pattern: Its appearance is guided by regularities in the data set.

30. Image as object without aura: It has no original or context of human production.

 

7. Political definitions

31. Image as infrastructure product: Dependent on global computing resources and architectural decisions.

32. Image as algorithmic curatorial decision: What appears is filtered through the prioritization, censorship, and policy mechanisms of the model.

33. Image as conditional access: It is determined by the level of subscription and permitted use of the system.

34. Image as Digital Privilege Trace: Its resolution and speed of delivery vary according to socio-economic conditions.

35. Image as a result of opaque governance: The user neither controls nor knows the exact criteria for its generation.

 

8. Economic definitions

36. Image as a differentiating good: It is produced as part of a strategy of exclusivity or experience customization.

37. Image as a monetization node: It can be transformed into an NFT, a commercial product or viral content.

38. Image as a by-product of a SaaS model: It is part of the value proposition that justifies a subscription model.

39. Image as a return on data investment: It is generated from years of training on massive data sets.

40. Image as an aesthetic trademark of the vendor: It implies a style, speed, or aesthetic specific to the generating system.

 

 

9. Archaeological and Evolutionary Definitions

41. Image as the current version of a genealogy: It is the heir of previous visual practices (collage, rendering, CGI).

42. Image as historical accumulation: It is loaded with layers of data from different eras and styles.

43. Image as a technical threshold: It marks a turning point in the evolution of computational imagery.

44. Image as synthetic residue: It accumulates as part of the growing archive of generated output.

45. Image as future archaeological evidence: It could be studied as a cultural trace of generative AI in our time.

 

 

10. Critical / Meta-theoretical Definitions

 

46. Image as visible ideology: It involves technical choices that reflect values, exclusions, and biases.

47. Image as object of critical speculation: It can be used to question the boundaries between authorship, automation, and culture.

48. Image as a form of accelerated abstraction: It is produced without consciousness, body or affect, but with visual logic.

49. Image as a reification of connections: It makes visible regularities without deep semantic content.

50. Image as a distorting mirror of desire: it does not return what is desired, but rather what the system infers it wants.

 

 

The 50 definitions of the image in the context of generative AI show that we can no longer understand the image as a passive unit of perception or as a stable representation of reality. Instead, it emerges as a technical, operational, and speculative entity. Each generated image not only manifests a process of statistical inference, but is also marked by economies of waiting, hidden infrastructures, algorithmic decisions, and a genealogy of visual techniques. The act of «waiting for an image» is a critical experience in itself, as it makes us aware of time, privilege, technical architecture, and the transformation of the image into a conditioned flow. From this perspective, the image no longer represents, but distributes, prioritizes, and calculates. AI does not produce images of the world, but of the system that produces them.

 

 

 

 

Practical applications of the definitions:

 

Critical Curation of Generative Images

These definitions allow for the construction of curatorial criteria for exhibitions that work with AI-generated images, focusing not on the «content» of the image, but on its technical, political, temporal, or epistemological nature.

 

Interface Design

Developers can incorporate these categories to create interfaces that present waiting, latency, or errors as part of the aesthetic experience, rather than hiding them.

 

Dataset analysis

Use these categories to classify training images and understand how aesthetic or ideological categories are distributed across datasets.

 

Critical digital media pedagogy

Use the definitions as a basis for courses in art, design, philosophy of technology, or visual studies that seek to problematize the image beyond its form.

 

Institutional critique of generative models

These definitions can inform institutional policies on algorithmic transparency and visual ethics, suggesting criteria for evaluating generative models in terms of bias, accessibility, or economic structure.

 

Speculative AI Architectures

System architects or artists can use the definitions to simulate alternative AI models in which time, waiting, or error are intentional and significant elements of visual production.

 

 

For example:

5. Data Set Analysis: Reconsidering the Image as a Distribution of Latent Biases and Structures

Core Concept

By redefining the image as a technical-algorithmic entity within a generative AI system, the need arises to address not only the visible result, but also the latent conditions that make it possible. The training dataset ceases to be a simple collection of images and becomes a visual structure charged with power, in which biases, repetitions, aesthetic hierarchies, omissions, and privileges are manifested. Thus, analyzing a dataset is not just about inspecting images, but also about mapping the patterns of visibility and exclusion they generate.

 

Practical application:

 

Using the 50 previous definitions (especially the ontological, epistemological, and political ones), a taxonomic analysis model of the dataset can be built:

For example, images can be classified according to their lighting patterns, facial proportions, or dominant artistic style.

Measure the geographic or ethnic distribution of faces.

Formal redundancies that promote «neutral» or dominant styles can also be identified.

Compare the number of images with neutral backgrounds to images with environmental context.

Evaluate implicit taxonomies: what types of bodies, spaces, or gestures are overrepresented or absent.

This can be translated into critical visual tools or interfaces, such as:

Visual bias maps that show the density of certain types of images.

Latency visualizations that show which types of images appear faster or with greater certainty in generation.

Dialog interfaces that confront the user with the genealogy of a generated image and show its training «ancestors».
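As a rough sketch of how such a taxonomic pass over a dataset’s metadata might look (the metadata schema, the categories, and the sample records below are hypothetical, not drawn from any actual training set):

# Illustrative sketch: measuring simple distributional properties of dataset metadata.
# The metadata schema, categories and sample records are hypothetical.
from collections import Counter

# Hypothetical per-image metadata (in practice extracted or annotated from the dataset).
dataset = [
    {"region": "Western Europe", "background": "neutral",       "style": "studio portrait"},
    {"region": "Western Europe", "background": "environmental", "style": "studio portrait"},
    {"region": "South America",  "background": "neutral",       "style": "documentary"},
    {"region": "Western Europe", "background": "neutral",       "style": "studio portrait"},
]

def distribution(records, key):
    """Relative frequency of each value of `key` across the records."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {value: round(n / total, 2) for value, n in counts.items()}

# Geographic distribution, background context, and stylistic redundancy.
print("region:", distribution(dataset, "region"))
print("background:", distribution(dataset, "background"))
print("style:", distribution(dataset, "style"))
# Over-represented values point to the «neutral» or dominant styles discussed above.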

 

 

6. Examples of Works, Installations and Interfaces:

 

A. «The Training Room» (interactive installation)

An immersive space that simulates being inside the dataset. Images are projected as a floating cloud, and each visitor can «select» an image to see which 1,000 other similar images accompany it. Each selection displays metrics such as «frequency», «stylistic repetition», «predominant race», «geographic origin», and «level of detail».

Inspired by model training rooms, but visually revealing invisible structures.

 

B. “Latent Discriminator” (critical web interface)

A website in which the user generates any image using AI. The same interface then returns a detailed analysis of which subsets of the dataset most influenced that image, including percentages, styles, semantic classes, detected biases, and possible omissions. It would be both an educational and critical tool. The user becomes aware that the generated image does not emerge from nowhere, but from a structured and deeply unbalanced field.

 

C. “The Delay Mirror” (algorithmic performance and installation)

A camera captures your face, and an AI attempts to generate your portrait. But the system intentionally introduces variable delays based on statistical criteria extracted from the dataset: if your facial style is less frequent in the dataset, the image takes longer. A graph shows the system’s level of “familiarity” with your face in real time. A critique of unequal representation and visibility privileges. Not all bodies are read with the same fluency.

 

D. «Genealogy of an Image» (Advanced Visualization)

When generating an image with AI, the system not only gives you the final image, but also displays a genealogy of the training data: key images, dominant styles, visible/invisible authors. Each image is accompanied by a «generative transparency» score and a «complexity of inheritance» score.

This makes it possible to visualize the debt that each generated image owes to its origins.

 

E. «Latency Atlas» (critical cartography of visual generation)

A large screen displays a map of the world divided into cultural, ethnographic and aesthetic zones. For each region, the average time it takes an AI system to generate images associated with that culture or style is measured and displayed. A bar graph shows which zones are «generated» the fastest.

Direct exposure of algorithmic inequalities. Speed is also bias.

 

 

7. Research References:

Yuk Hui – The Question Concerning Technology in China

Benjamin Bratton – The Stack: describes how platforms reorganize forms of sovereignty and time.

Mark Hansen – Feed-Forward: On the Future of 21st-Century Media: explores how 21st-century media preprocess the future.

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Transitory Artefacts: A Journey through Time and Media

 

 

 

 

 

Marcello Mercado

Transitory Artefacts: A Journey through Time and Media

2000 – 2025

 

 

 

Dead Code Anatomies

 

 

The Economy of Residues: Temporal Assemblages of Capital and Body

 

 

Gödel Suite, 2009

 

 

Marcello Mercado: Gödel Devices and Epistemic Apparatuses: A Performative Construction (Geneva, 2006)

 

 

Marcello Mercado: Images become containers; containers become images: A Performance in Seven Movements, 2005

 

 

DELETE

 

Bestiary for the Minds of the 21st Century: Genomic Opera

 

transferring, storing, sharing and hybriding: The perfect humus

 

 

Trace, Burn, Archive, 2005 – 2008

 

 

Index – Generator, Performance, 2004

 

 

 

Listening the chromosome 17, 1-channel Video Installation, 2023

 

 

Human Genome re-Activation – Low Lives 3 International Festival of Live Networked Performances, 2011

 

 

To whom belongs the Time?

 

 

 

How to explain to a dead mole the difference between…? Performance, 2001

 

 

Azimuth 77, Performance, 2006

 

 

Confinment, Artist´s book, 82 pages, 2020

 

Making consistent volatile ideas by broadcasting bio-information through plants, DNA, worms and Radio Frequencies

 

 

The algorithm, 2024, Process art – New media art – AI

 

 

How leopards caught leopards

 

 

Burning Garden – 2024

 

 

Curves, compost, forecasts and closures, Performance, 2020

 

 

 

 

Dead Code Anatomies

 

Marcello Mercado

Dead Code Anatomies

Installation

1990 – 2025

 

 

Dead Code Anatomies is a series of four algorithmic scripts that address the mechanisms by which physical violence is filtered, refracted, and suppressed by systems of procedural representation. Each work is based on a black and white analog photograph taken in 1990 using Kodak Plus-X 125 film. The photographs depict four indigenous individuals from the Qom people of Roque Sáenz Peña, Chaco, Argentina. At the time of the photograph, three of the children were smiling. Their mother, however, refused to be photographed; her image appears blurred due to her movement while hiding.

 

Thirty-five years later, these black and white images are reproduced visually and also reprocessed through algorithmic code. This rewriting does not reconstruct the analog moment – it distorts and encodes its afterlife. The algorithms do not articulate a testimony, but a systemic latency. Violence is not shown, but operated; it is rendered procedural, filtered through structures of malfunction, exception, and protocol collapse. The resulting works produce what might be called algorithmic remnants: structural residues of trauma stored in unresolved functions, aborted execution paths, and controlled error environments.

 

Each algorithm is printed on paper, framed in black, and placed behind glass. The four frames are mounted on a white wall in a linear arrangement. Below each frame, the corresponding analog photograph, taken in 1990, is affixed directly to the wall with visible tape. The photographs retain their original format and scale. The exhibition space is lit with soft, diffused white light. The floor is gray concrete. The installation is minimal but precise, foregrounding the tension between materiality, evidence, and procedural abstraction.

 

This configuration, the physical co-presence of code and image, anchors the work within the field of algorithmic photographic installation. The analog photographs suggest a past moment of apparent tranquility or even joy; the algorithmic layers introduce a speculative but historically grounded prognosis: one in which the photographed bodies may have been subjected to institutional abandonment, racialized violence, or systemic neglect.

 

Time in this series is not linear, but procedural. The 35-year gap functions as latency: a deferred reckoning processed through media protocols, archival dysfunction, and institutional delay. Violence is not archived – it is coded. And what remains is not memory but dead code.

 

The scripts simulate different modes of institutional failure:

 

 

Part I maps trauma within facial regions and destabilizes clinical coherence.

 

 

 

Part II codifies signs of abuse but collapses under semantic overload, triggering a silent mode.

 

 

 

Part III references sexual violence, generating illegible syntax and fragmented output.

 

 

Part IV enacts censorship protocols when thresholds of resistance or rupture are detected.

 

 

 


 

 

 

(Spanish)

 

Anatomías de código muerto
Instalación
1990 – 2025

Anatomías de código muerto es una serie de cuatro scripts algorítmicos que abordan los mecanismos mediante los cuales la violencia física es filtrada, refractada y suprimida por sistemas de representación procedimental. Cada obra parte de una fotografía analógica en blanco y negro tomada en 1990 con película Kodak Plus-X 125. Las imágenes retratan a cuatro personas indígenas de la comunidad Qom en Roque Sáenz Peña, Chaco, Argentina. En el momento de la toma, tres de los niños sonreían. Su madre, en cambio, se negó a ser fotografiada; su imagen aparece movida debido a su desplazamiento al ocultarse.

Treinta y cinco años más tarde, estas imágenes en blanco y negro son reproducidas visualmente y también reprocesadas mediante código algorítmico. Esta reescritura no reconstruye el momento analógico: lo distorsiona y codifica su persistencia. Los algoritmos no articulan un testimonio, sino una latencia sistémica. La violencia no se muestra, sino que se ejecuta; se convierte en procedimiento, filtrada a través de estructuras de malfunción, excepción y colapso de protocolos. Las obras resultantes generan lo que puede denominarse restos algorítmicos: residuos estructurales del trauma almacenados en funciones no resueltas, rutas de ejecución abortadas y entornos de error controlado.

Cada algoritmo se imprime en papel, se enmarca en negro y se presenta detrás de un vidrio. Los cuatro marcos se disponen en línea sobre una pared blanca. Debajo de cada marco, la fotografía analógica correspondiente, tomada en 1990, se adhiere directamente a la pared con cinta adhesiva visible. Las fotografías conservan su formato y escala originales. El espacio expositivo está iluminado con luz blanca difusa. El piso es de hormigón gris. La instalación es mínima pero precisa: subraya la tensión entre materialidad, evidencia y abstracción procedimental.

Esta configuración —la copresencia física de código e imagen— sitúa el trabajo dentro del campo de la instalación fotográfica algorítmica. Las fotografías analógicas sugieren un momento pasado de aparente calma o incluso alegría; las capas algorítmicas introducen una prognosis especulativa, pero históricamente fundamentada: una en la que los cuerpos fotografiados pudieron haber sido objeto de abandono institucional, violencia racializada o negligencia sistémica.

Los scripts simulan diferentes modos de fallo institucional:

Parte I mapea el trauma en regiones faciales y desestabiliza la coherencia clínica.

Parte II codifica signos de abuso pero colapsa por sobrecarga semántica, activando un modo de silencio.

Parte III refiere a la violencia sexual, generando sintaxis ilegible y salida fragmentada.

Parte IV activa protocolos de censura cuando se detectan umbrales de resistencia o ruptura.

El tiempo en esta serie no es lineal, sino procedimental. El intervalo de 35 años funciona como latencia: un ajuste diferido procesado a través de protocolos mediáticos, disfunción archivística y demora institucional. La violencia no se archiva: se codifica. Y lo que permanece no es memoria, sino código muerto.


(German)

Toter Code Anatomien
Installation
1990 – 2025

Toter Code Anatomien ist eine Serie von vier algorithmischen Skripten, die die Mechanismen untersuchen, durch welche physische Gewalt von prozeduralen Repräsentationssystemen gefiltert, gebrochen und unterdrückt wird. Jede Arbeit basiert auf einer analogen Schwarzweiß-Fotografie, die 1990 mit Kodak Plus-X 125-Film aufgenommen wurde. Die Fotografien zeigen vier indigene Personen des Qom-Volkes aus Roque Sáenz Peña, Chaco, Argentinien. Zum Zeitpunkt der Aufnahme lächelten drei der Kinder. Die Mutter weigerte sich jedoch, fotografiert zu werden; ihr Bild ist verwischt, da sie sich während der Aufnahme verbarg.

Fünfunddreißig Jahre später werden diese Schwarzweiß-Bilder sowohl visuell reproduziert als auch durch algorithmischen Code neu verarbeitet. Diese Umschreibung rekonstruiert nicht den analogen Moment – sie verzerrt und codiert sein Nachleben. Die Algorithmen formulieren kein Zeugnis, sondern eine systemische Latenz. Gewalt wird nicht gezeigt, sondern ausgeführt; sie wird prozedural, gefiltert durch Strukturen des Fehlverhaltens, der Ausnahme und des Protokollzusammenbruchs. Die resultierenden Werke erzeugen sogenannte algorithmische Überreste: strukturelle Rückstände von Trauma, gespeichert in ungelösten Funktionen, abgebrochenen Ausführungspfaden und kontrollierten Fehlerumgebungen.

Jeder Algorithmus wird auf Papier gedruckt, schwarz gerahmt und hinter Glas präsentiert. Die vier Rahmen sind in linearer Anordnung an einer weißen Wand montiert. Unter jedem Rahmen ist das entsprechende analoge Foto von 1990 direkt mit sichtbarem Klebeband an die Wand geklebt. Die Fotografien behalten ihr ursprüngliches Format und Maß bei. Der Ausstellungsraum ist mit weichem, diffusem Weißlicht beleuchtet. Der Boden ist aus grauem Beton. Die Installation ist minimal, aber präzise – sie betont die Spannung zwischen Materialität, Evidenz und prozeduraler Abstraktion.

Diese Konfiguration – die physische Ko-Präsenz von Code und Bild – verankert die Arbeit im Feld der algorithmischen fotografischen Installation. Die analogen Fotografien deuten auf einen vergangenen Moment scheinbarer Ruhe oder sogar Freude hin; die algorithmischen Schichten hingegen führen eine spekulative, aber historisch fundierte Prognose ein: eine, in der die abgebildeten Körper institutionellem Verlassen, rassifizierter Gewalt oder systemischer Vernachlässigung ausgesetzt gewesen sein könnten.

Die Skripte simulieren verschiedene Formen institutionellen Versagens:

Teil I kartiert Traumata in Gesichtsregionen und destabilisiert klinische Kohärenz.

Teil II codiert Anzeichen von Missbrauch, kollabiert jedoch unter semantischer Überlastung und aktiviert einen Schweigemodus.

Teil III verweist auf sexuelle Gewalt, erzeugt jedoch unlesbare Syntax und fragmentierte Ausgaben.

Teil IV setzt Zensurprotokolle in Gang, sobald Widerstands- oder Bruchschwellen erkannt werden.

Zeit ist in dieser Serie nicht linear, sondern prozedural. Das 35-jährige Intervall fungiert als Latenz: eine verzögerte Auseinandersetzung, verarbeitet durch mediale Protokolle, Archivdysfunktionen und institutionelle Verzögerung. Gewalt wird nicht archiviert – sie wird codiert. Und was bleibt, ist nicht Erinnerung, sondern toter Code.

 

Marcello Mercado – AI – Books – Photography

 

Marcello Mercado

AI – Books

 

 

Marcello Mercado

What I mean to say: There is a war on

Was ich damit sagen will: Es herrscht Krieg

398 pages

EX_AI-Book

2024

 

 

 

 

 

 

Marcello Mercado

High volumes of Post-Post-After-After

AI-Book, 623 pages

2024

 

 

 

 

Marcello Mercado

Mathematical Heads

AI-Book, 132 pages

2024

 

 

 

 

 

 

Marcello Mercado

El verde que te rodea / The green that surrounds you / Das Grün, das dich umgibt

AI-Book / Artificial intelligence

2023

 

263 ways of looking at Metahumans / 263 formas de ver a los metahumanos / 263 Sichtweisen auf Metamenschen.

 

 

 

 

 

 

Marcello Mercado

BRK 6s7gz sghxgcgcw ehwgd

Artist´s book, 280 pages

2012 – 2023

 

 

 

 

 

 

 

 

Marcello Mercado

To paint with Data I

AI-Book, 310 pages

2023

 

 

 

 

 

 

Marcello Mercado

To paint with Data II

AI-Book, 138 pages

2023

 

 

 

 

 

Marcello Mercado

Licht Pentimento

Artist´s book, 98 pages

2004

 

 

 

 

Marcello Mercado

FHLR

42 pages, artist´s book

2010

 

 

 

 

Marcello Mercado

XIP

Artist´s book

2003, re-edited 2007

 

 

 

 

 

 

 

Marcello Mercado

The journey with the Beagle

watercolor on paper

Artist´s Book, 24 pages

2010

 

 

 

 

 

 

 

Marcello Mercado

Doppelt (Double)

Photo book, 30 pages

2013

 

The photographic book is a visual exploration of the image through two distinct techniques: long exposures in bulb mode and softart.

 

 

 

 

 

 

 

 

 

Marcello Mercado – The Hypergaussian War – AI Film Animation – 2025

 

 

Marcello Mercado
The Hypergaussian War
AI Film Installation, color, sound, 73 minutes 03″
2025

 

Marcello Mercado
The Hypergaussian War  (Silent version)
AI Film Installation, color, silent, 73 minutes 03″
2025

 

 

The Hypergaussian War is an AI-powered feature film that unfolds a war between obsolete video game characters. Forgotten avatars clash in massive battles, wielding magical forces and navigating vast, surreal landscapes where the history of video games converges with the intricate worlds of 16th-century Flemish painting, itself transfigured into a haunting, playable simulation. As they traverse these impossible realms—video games that never existed and paintings that now pulse with digital life—they engage in relentless action, each seeking ultimate victory.

 

Artistic Concept & AI Integration

The film reconstructs the evolution of digital game aesthetics, featuring obsolete low-resolution characters from various historical periods:
• 8-bit and 16-bit era (1970s-1980s): Blocky, pixelated sprites.
• 16-bit and 32-bit era (1990s): More detailed sprites and improved animations.
• Early 3D polygonal era (1995-2000): Basic 3D models with low-resolution textures.
• Realism and motion capture (2000-2010): Increasing complexity in character detail and animation.
• High-definition and AI-driven realism (2010-2020): Photorealistic models, AI-enhanced expressions.

These historical aesthetics are intentionally mixed and juxtaposed, creating a layered visual narrative where different generations of characters coexist in the same war. The AI algorithms generate procedural battle sequences and dynamic interactions, further blurring the lines between past and future visual paradigms.
The technical narrative of the film integrates AI-driven editing techniques, reminiscent of classical musical film editing, while deliberately incorporating the visual imperfections of old cinema techniques. For example, the camera movements are inspired by faulty mechanical cranes and imprecise tracking systems, reinforcing the artificiality and nostalgic realism of the simulation.

Hypergaussian functions play a crucial role in both the film’s AI architecture and its conceptual foundation. These functions allow for the generation of complex, high-dimensional transformations that shape the war’s chaotic yet structured aesthetic. In practical terms, they enable:
• Procedural motion synthesis, controlling the erratic yet fluid movement of obsolete characters.
• AI-driven glitch aesthetics, where battle sequences incorporate controlled distortions and unpredictable shifts.
• Audio-visual synchronization, influencing how visual and sonic glitches evolve over time.
The title The Hypergaussian Wars reflects this mathematical influence, suggesting a war fought within high-dimensional spaces, where obsolete forms struggle for relevance in a continuously evolving digital landscape.
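As a purely mathematical aside (the film’s actual generative pipeline is not documented here), a hypergaussian, or super-Gaussian, function generalizes the Gaussian bell with an order parameter p: f(x) = exp(-((x - μ)² / 2σ²)^p). The sketch below, an assumption for illustration only, shows how such an envelope could modulate motion or glitch intensity over time.

# Sketch of a hypergaussian (super-Gaussian) envelope: exp(-((x - mu)^2 / (2 sigma^2))^p).
# Its use as a motion/glitch envelope here is an illustrative assumption, not the film's code.
import numpy as np

def hypergaussian(x, mu=0.0, sigma=1.0, p=1.0):
    """p = 1 gives the ordinary Gaussian; larger p flattens the top and sharpens the edges."""
    return np.exp(-(((x - mu) ** 2) / (2.0 * sigma ** 2)) ** p)

t = np.linspace(-3.0, 3.0, 7)
for order in (1, 2, 4):
    envelope = hypergaussian(t, sigma=1.0, p=order)
    # The envelope could scale displacement amplitude or distortion strength per frame.
    print(f"p={order}:", np.round(envelope, 3))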

 

 

(Spanish)

The Hypergaussian War es un largometraje impulsado por inteligencia artificial que narra una guerra entre personajes obsoletos de videojuegos. Avatares olvidados chocan en batallas masivas, empuñando fuerzas mágicas y recorriendo vastos paisajes surrealistas donde la historia de los videojuegos converge con los intrincados mundos de la pintura flamenca del siglo XVI, transfigurada en una inquietante simulación jugable. A medida que atraviesan estos reinos imposibles—videojuegos que nunca existieron y pinturas que ahora laten con vida digital—se sumergen en una acción implacable, cada uno en busca de la victoria definitiva.

Concepto artístico e integración de IA

La película reconstruye la evolución de la estética digital en los videojuegos, representando personajes de baja resolución de diferentes épocas históricas:
• Era de 8 bits y 16 bits (1970-1980): Sprites toscos y pixelados.
• Era de 16 bits y 32 bits (1990): Sprites más detallados y animaciones mejoradas.
• Primera era del 3D poligonal (1995-2000): Modelos 3D básicos con texturas de baja resolución.
• Inicio del realismo y motion capture (2000-2010): Mayor nivel de detalle y animaciones más complejas.
• Alta definición y realismo impulsado por IA (2010-2020): Modelos fotorrealistas y expresiones faciales generadas por inteligencia artificial.
Estas estéticas históricas se mezclan y yuxtaponen intencionalmente, generando una narrativa visual estratificada en la que distintas generaciones de personajes coexisten en la misma batalla. Algoritmos de IA generan escenas de combate procedurales e interacciones dinámicas, difuminando aún más los límites entre paradigmas visuales pasados y futuros.
En el nivel técnico-narrativo, la película integra técnicas de edición con IA, inspiradas en montajes de cine musical, al tiempo que conserva deliberadamente imperfecciones visuales de antiguas técnicas cinematográficas. Por ejemplo, los movimientos de cámara se inspiran en grúas mecánicas defectuosas y sistemas de tracking imprecisos, reforzando así la artificialidad y el realismo nostálgico de la simulación.

Las funciones hipergaussianas desempeñan un papel central tanto en la arquitectura de IA de la película como en su fundamento conceptual. Estas funciones permiten la generación de transformaciones complejas y de alta dimensión, que modelan la estética caótica pero estructurada de la guerra. En términos concretos, posibilitan:
• Síntesis de movimiento procedural, controlando los desplazamientos irregulares pero fluidos de los personajes obsoletos.
• Estética glitch generada por IA, donde las escenas de combate presentan distorsiones controladas y desplazamientos impredecibles.
• Sincronización audiovisual, que regula la evolución de glitches visuales y sonoros a lo largo del tiempo.
El título The Hypergaussian Wars refleja esta influencia matemática y sugiere una guerra librada en espacios de alta dimensionalidad, donde formas obsoletas luchan por su relevancia en un paisaje digital en constante transformación.

 

 

(German)

The Hypergaussian War ist ein von KI betriebenes Spielfilmprojekt, das einen Krieg zwischen veralteten Videospielcharakteren entfaltet. Vergessene Avatare kämpfen in gewaltigen Schlachten, setzen magische Kräfte ein und durchqueren weite, surreale Landschaften, in denen die Geschichte der Videospiele mit den detailreichen Welten der flämischen Malerei des 16. Jahrhunderts verschmilzt – selbst verwandelt in eine gespenstische, spielbare Simulation. Während sie diese unmöglichen Reiche durchqueren – Videospiele, die nie existierten, und Gemälde, die nun mit digitalem Leben pulsieren – stürzen sie sich in unerbittliche Action, jeder auf der Suche nach dem ultimativen Sieg.

Künstlerisches Konzept & KI-Integration

Der Film rekonstruiert die Entwicklung digitaler Spielästhetik, indem er veraltete, niedrig aufgelöste Charaktere aus verschiedenen historischen Epochen darstellt:
• 8-Bit- und 16-Bit-Ära (1970-1980): Klobige, pixelige Sprites.
• 16-Bit- und 32-Bit-Ära (1990): Detailliertere Sprites und verbesserte Animationen.
• Frühe 3D-Polygon-Ära (1995-2000): Einfache 3D-Modelle mit niedrig aufgelösten Texturen.
• Anfänglicher Realismus und Motion Capture (2000-2010): Zunehmende Detailgenauigkeit und komplexere Animationen.
• Hochauflösung und KI-gesteuerter Realismus (2010-2020): Fotorealistische Modelle, KI-gestützte Gesichtsausdrücke.

Diese historischen Ästhetiken werden bewusst vermischt und gegenübergestellt, wodurch eine mehrschichtige visuelle Erzählung entsteht, in der verschiedene Generationen von Charakteren in derselben Schlacht koexistieren. KI-Algorithmen generieren prozedurale Kampfszenen und dynamische Interaktionen, wodurch die Grenzen zwischen vergangenen und zukünftigen visuellen Paradigmen weiter verschwimmen.
Auf technisch-narrativer Ebene integriert der Film KI-gesteuerte Schnitttechniken, die an klassische Musikfilmmontagen erinnern, während gleichzeitig visuelle Unvollkommenheiten alter Filmtechniken bewusst beibehalten werden. Beispielsweise sind die Kamerabewegungen von defekten mechanischen Kränen und ungenauen Tracking-Systemen inspiriert, um die Künstlichkeit und nostalgische Realismus der Simulation zu verstärken.

Hypergauss-Funktionen spielen eine zentrale Rolle sowohl in der KI-Architektur des Films als auch in seinem konzeptionellen Fundament. Diese Funktionen ermöglichen die Erzeugung komplexer, hochdimensionaler Transformationen, die die chaotische, aber strukturierte Ästhetik des Krieges formen. Konkret ermöglichen sie:
• Prozedurale Bewegungssynthese, die die unregelmäßigen, aber fließenden Bewegungen veralteter Charaktere steuert.
• KI-generierte Glitch-Ästhetik, bei der Kampfszenen kontrollierte Verzerrungen und unvorhersehbare Verschiebungen enthalten.
• Audiovisuelle Synchronisation, die beeinflusst, wie visuelle und klangliche Glitches sich im Laufe der Zeit entwickeln.
Der Titel The Hypergaussian Wars spiegelt diesen mathematischen Einfluss wider und suggeriert einen Krieg, der in hochdimensionalen Räumen ausgetragen wird, in denen veraltete Formen um ihre Relevanz in einer sich ständig weiterentwickelnden digitalen Landschaft kämpfen.

 

 

(French)

La Guerre Hypergaussienne est un long-métrage propulsé par l’intelligence artificielle qui raconte une guerre entre des personnages de jeux vidéo obsolètes. Des avatars oubliés s’affrontent dans des batailles titanesques, maniant des forces magiques et explorant d’immenses paysages surréalistes où l’histoire des jeux vidéo converge avec les mondes complexes de la peinture flamande du XVIe siècle, elle-même transformée en une troublante simulation jouable. En traversant ces royaumes impossibles—des jeux vidéo qui n’ont jamais existé et des peintures qui vibrent désormais d’une vie numérique—ils se lancent dans une action acharnée, chacun cherchant la victoire ultime.

Concept Artistique & Intégration de l’IA

Le film reconstruit l’évolution de l’esthétique des jeux numériques, en mettant en scène des personnages obsolètes à basse résolution issus de différentes périodes historiques :

Ère 8-bit et 16-bit (années 1970-1980) : Sprites pixelisés et anguleux.
Ère 16-bit et 32-bit (années 1990) : Sprites plus détaillés et animations améliorées.
Première ère de la 3D polygonale (1995-2000) : Modèles 3D rudimentaires avec textures en basse résolution.
Réalisation réaliste et capture de mouvement (2000-2010) : Complexité croissante des détails et animations.
Haute définition et réalisme basé sur l’IA (2010-2020) : Modèles photoréalistes et expressions améliorées par l’IA.

Ces esthétiques historiques sont volontairement mélangées et juxtaposées, créant une narration visuelle stratifiée où différentes générations de personnages coexistent au sein d’une même guerre. Les algorithmes d’IA génèrent des séquences de bataille procédurales et des interactions dynamiques, brouillant encore davantage les frontières entre les paradigmes visuels du passé et du futur.

La narration technique du film intègre des techniques de montage pilotées par l’IA, évoquant le montage classique du cinéma musical, tout en incorporant délibérément les imperfections visuelles des anciennes techniques cinématographiques. Par exemple, les mouvements de caméra s’inspirent de grues mécaniques défectueuses et de systèmes de suivi imprécis, renforçant l’artificialité et le réalisme nostalgique de la simulation.

Les Fonctions Hypergaussiennes et l’Architecture de l’IA

Les fonctions hypergaussiennes jouent un rôle crucial tant dans l’architecture IA du film que dans sa fondation conceptuelle. Ces fonctions permettent la génération de transformations complexes et multidimensionnelles qui façonnent l’esthétique chaotique mais structurée de la guerre. Concrètement, elles permettent :

Synthèse de mouvement procédurale, contrôlant le déplacement erratique mais fluide des personnages obsolètes.
Esthétique glitch pilotée par l’IA, où les séquences de combat intègrent des distorsions contrôlées et des variations imprévisibles.
Synchronisation audio-visuelle, influençant l’évolution temporelle des glitchs visuels et sonores.

Le titre La Guerre Hypergaussienne reflète cette influence mathématique, suggérant une guerre menée dans des espaces multidimensionnels, où des formes obsolètes luttent pour leur pertinence dans un paysage numérique en perpétuelle évolution.


 

 

 

Marcello Mercado, What I mean to say: There is a war on, EX_AI-Book, 2024

 

Marcello Mercado

What I mean to say: There is a war on

Was ich damit sagen will: Es herrscht Krieg

398 pages

EX_AI-Book

2024

 

 

 

 

 

 

The algorithm, 2024, Process art – New media art – AI

 

 

 

Marcello Mercado

The Algorithm

2024

Process Art – New Media Art – Photography – AI

 

 

The Algorithm is an exploration of the intersection between obsolete technologies, photographic processes, and the evolving nature of AI. It traces the transformation of a photographic recording apparatus over five decades, employing both historical reference and technological development to produce a new, integrated system.

In 1972, the artist’s grandfather was photographed using a Century Number 7 large-format folding camera (*), an early 20th-century device that captured highly detailed portraits. In 1980, the same camera was used to capture the artist’s own portrait, long after his grandfather’s death. This juxtaposition of temporal gaps sets the tone for the project’s investigation of technological evolution. At that point, the camera was relegated to storage, its physical obsolescence marking a moment of suspended history.

In 1998, the artist recovered the camera, bringing it back from 26 years of dormancy. In 2024, the artist revisited the camera, now using a 2009 iPod (**) to photograph the remnants of the once-relevant tool, an obsolete device that bridged the analog and digital eras. The final photograph of the camera was taken using an iPhone 14, the latest in digital photography technology, marking the completion of a technological cycle from the heavy, mechanical analog to the sophisticated, AI-integrated systems of today.

The project moves beyond symbolic or representational frameworks to focus on the material and procedural connections between technologies. Each image in the process not only documents a specific object but contributes to an unfolding algorithm that connects the past to the present, analog to digital, and mechanical to artificial intelligence. The final output is an algorithm generated through AI that incorporates the visual information collected by the cameras across the decades.

This work draws on media archaeology as articulated by Siegfried Zielinski, whose emphasis on the recuperation of discarded technologies allows for a deeper understanding of how these objects continue to influence contemporary culture. The Algorithm operates within this framework, not only as a historical narrative but as a living exploration of how obsolete technologies carry forward the traces of their past functions, even in their residual states. Through this lens, obsolescence becomes a productive force, fueling a new cycle of technological engagement.

.

.

The Process:

 

A.

1972: A portrait of the artist’s grandfather is captured in the public square of the artist’s hometown by an anonymous photographer, using the Century Number 7 camera. This moment marks the initial interaction between the artist’s family history and the medium of photography. The camera, designed for high-resolution, large-format images, signifies a moment where technology is embedded in personal memory.

 

 

01. Portrait of the artist’s grandfather, 1972

 

 

B.

1980: The same camera is used to photograph the artist. In a subtle twist, the portrait is given a «vintage mask» effect, which makes the photograph appear older than the one taken in 1972, highlighting the contrast between perceived time and actual technological progress.

 

02. Portrait of the artist, 1980

 

.

 

C.

1998: The remains of the Century Number 7 camera, stored for decades, are rediscovered by the artist.

 

.

 

D.

2024: Using a 2009 iPod, the artist photographs the remains of the Century Number 7 camera, marking the crossover from physical to portable digital technology. Despite its own obsolescence, the iPod serves as a transitional tool in the photographic evolution.

 

 

03. 2009 iPod taking a photo of the remains of the Century Number 7 camera

 

04. 2009 iPod photograph of the remains of the Century Number 7 camera

 

 

E.

2024: The final photograph is taken with an iPhone 14, representing the pinnacle of digital and AI-integrated photography. The phone, lighter, faster, and more accessible than its predecessors, signifies the completion of the technological transition from analog to digital, from mechanical to AI-powered systems.

 

05. Remains of the rescued Century Number 7 camera

 

 

 

 

 

Each photograph taken during this process contributes to an algorithm that the artist uses to generate an on-demand artificial intelligence response. The algorithm serves not only as a documentation of the passage from one device to another but also as a living expression of how technologies evolve, disappear, and, in their obsolescence, continue to inform new ways of thinking and creating.

 

 

 

The performance shifts from an historical recounting of technological change to an active, ongoing process of engagement with technology, memory, and cultural history. Each device, from the Century Number 7 to the iPhone 14, acts as both a marker of time and an agent of transformation, tracing a line from the past to the present, and ultimately, through artificial intelligence, into a new future.

The Algorithm not only addresses the obsolescence of technologies but also reveals how these obsolete forms continue to shape contemporary creative practice. It is an act of media archaeology that does not merely recover the past but allows it to inform the future. The work demonstrates the continuous presence of old technologies in the fabric of modernity, revealing their residual power to inform and generate new forms of artistic and intellectual expression.

 

 

F.

The five resulting images were processed by an artificial intelligence language model to produce an on-demand algorithm.

This algorithm was then regenerated to produce a second variation.

 

 

Algorithm 01- AI

PHOTO 01 + PHOTO 02 + PHOTO 03 + PHOTO 04 + PHOTO 05

 

A step-by-step algorithm that traces the transition of a photographic recording apparatus from an analog, physically heavy camera to AI

This algorithm provides a narrative framework for tracing the technological evolution from analog cameras to AI, incorporating historical context, personal connections, and the protagonist’s interaction with various devices:

  1. Introduction: Begin by introducing the task of tracing the transition of a photographic apparatus from analog to AI, spanning three cameras and a language model.
  2. Initial Setting: Describe the initial scenario where a Century Number 7 camera, known for its green flexible bellows, captured portraits in a town plaza in Argentina in 1972.
  3. Camera Evolution: Detail the history and significance of the Century Camera Co., founded in 1900, its acquisition by Eastman Kodak Co. in 1903, and eventual discontinuation in 1920.
  4. First Transition: Transition to the iPod era, introduced by Apple Inc. in 2001, offering portable digital audio players. Describe its features, including weight, storage capacity, screen size, battery life, and data transfer speed.
  5. Second Transition: Move forward to the iPhone 14, the latest smartphone model as of the present. Provide details on its announcement, release date, screen size, camera improvements, and connectivity features.
  6. Personal Connection: Introduce the personal connection to the story, mentioning that the protagonist’s grandfather was born in 1898 and the protagonist in 1963. Emphasize the transition from older to newer technologies over their lifetimes.
  7. Storage of Old Devices: Mention how the Century Number 7 camera was stored in a warehouse for 26 years before being rediscovered in 1998.
  8. Capture with Obsolete iPod: Describe how a photo of the Century Number 7 camera was taken using a 2009 iPod, marking the intersection of old and new technologies.
  9. Final Transition: Capture the last photo of the Century Number 7 camera with an iPhone 14, representing the ultimate evolution in technology from analog to digital and from physical to AI.
  10. Personal Reflection: Conclude with a reflection on the fascination with both old and new technologies and how the protagonist navigates between them using an iPhone 14 and ChatGPT.
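Purely as a hypothetical illustration (not the artist's actual workflow), the sum «PHOTO 01 + PHOTO 02 + PHOTO 03 + PHOTO 04 + PHOTO 05» and the ten steps above could be encoded and handed to a language model roughly as in the following Python sketch; every file name, variable, and the final model call are invented placeholders.

photos = [
    "photo_01_grandfather_1972.jpg",        # placeholder file names
    "photo_02_artist_1980.jpg",
    "photo_03_ipod_shooting_camera_2024.jpg",
    "photo_04_ipod_image_of_camera_2024.jpg",
    "photo_05_iphone14_image_of_camera_2024.jpg",
]

steps = [
    "Introduce the task: trace a photographic apparatus from analog to AI.",
    "Set the scene: a Century Number 7 camera in a town plaza in Argentina, 1972.",
    "Summarize the Century Camera Co.: founded 1900, acquired by Eastman Kodak Co. in 1903, discontinued 1920.",
    "First transition: the iPod era, introduced by Apple Inc. in 2001.",
    "Second transition: the iPhone 14.",
    "Personal connection: grandfather born 1898, protagonist born 1963.",
    "Storage: the camera kept in a warehouse for 26 years, rediscovered in 1998.",
    "Capture with an obsolete 2009 iPod.",
    "Final capture with an iPhone 14.",
    "Reflect on navigating old and new technologies with an iPhone 14 and ChatGPT.",
]

# Assemble one prompt from the image references and the numbered steps.
prompt = ("Images:\n" + "\n".join(photos) + "\n\nSteps:\n"
          + "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps)))

# Submitting the same prompt to the model a second time would yield the
# regenerated variation described below as Algorithm 02.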

 

 

 

Algorithm 02- AI

 

PHOTO 01 + PHOTO 02 + PHOTO 03 + PHOTO 04 + PHOTO 05

 

  1. Introduction:
    • Begin by outlining the objective of tracing the transition of a photographic apparatus from analog to AI, covering three distinct camera technologies and their evolution alongside a language model.
  2. Initial Setting:
    • Set the scene with the Century Number 7 camera capturing portraits in a town plaza in Argentina in 1972, highlighting its distinctive green flexible bellows.
  3. Camera Evolution:
    • Provide a historical overview of the Century Camera Co., established in 1900 and its subsequent acquisition by Eastman Kodak Co. in 1903, leading to its discontinuation in 1920.
  4. First Transition:
    • Transition to the iPod era, initiated by Apple Inc. in 2001, featuring portable digital audio players. Detail the iPod’s key features such as weight, storage capacity, screen size, battery life, and data transfer speed.
  5. Second Transition:
    • Progress to the iPhone 14, the latest smartphone model available at the time. Discuss its announcement, release date, screen size, camera enhancements, and connectivity capabilities.
  6. Personal Connection:
    • Introduce a personal connection to the narrative, mentioning the birth years of the protagonist’s grandfather (1898) and the protagonist (1963). Emphasize the generational shift from older to newer technologies.
  7. Storage of Old Devices:
    • Describe the circumstances of the Century Number 7 camera being stored in a warehouse for 26 years before its rediscovery in 1998, highlighting the passage of time and technological advancements during its hiatus.
  8. Capture with Obsolete iPod:
    • Explain how a photo of the Century Number 7 camera was taken using a 2009 iPod, symbolizing the convergence of traditional and modern technologies.
  9. Final Transition:
    • Conclude with the capture of the final photo of the Century Number 7 camera using an iPhone 14, symbolizing the culmination of technological evolution from analog to digital and from physical to AI.
  10. Personal Reflection:
    • Reflect on the fascination with both old and new technologies, and how the protagonist navigates between them using an iPhone 14 and ChatGPT, underscoring the interplay between past, present, and future in technological advancement.

This algorithm provides a structured approach to narrating the journey of technological evolution, weaving together historical context, personal connections, and the protagonist’s interaction with various devices.

 

 

 

 

 

 

 

Notes:

(*).

The Century Number 7 large format folding camera is a model from the early 20th century and was manufactured by the Century Camera Company. Here are some of its key features:
Film Format: The Century Number 7 camera was designed to use large format film, usually in sizes such as 4×5 inches or 8×10 inches. This made it possible to capture high resolution images with a high level of detail.
Folding design: As the name suggests, this camera had a folding design that allowed it to be collapsed for easy transportation and storage. When folded, the camera became more compact and portable, making it ideal for photographers who needed mobility.
Lens Focusing: The Century Number 7 typically came equipped with a high-quality focusing lens that offered manual focusing options as well as the use of an extendable bellows. This allowed the focal length to be adjusted to achieve optimal sharpness in the image.
Rugged construction: Large-format cameras such as the Century Number 7 were often constructed of durable materials such as wood and metal, giving them a sense of solidity and stability. This ruggedness contributed to the camera’s longevity and ability to withstand the rigors of outdoor use.
Tripod Mount: Due to its nature and size, the Century Number 7 would typically be mounted on a tripod to ensure stability during photography. Tripods provided solid support and allowed the height and angle of the camera to be adjusted to suit the photographer’s needs.

 

 

(**)

The 2009 version of the iPod did not have a built-in camera. However, in September 2009, Apple released the fifth-generation iPod nano with a built-in video camera. Here are the key features of this camera:
Resolution: The camera on the fifth-generation iPod nano had a video resolution of 640 x 480 pixels (VGA) at 30 frames per second.
Storage capacity: The iPod nano allowed you to record video directly on the device and store it in its internal memory, which varied depending on the model’s capacity (8GB or 16GB).
Video format: Videos were recorded in H.264 format and saved as .MOV files.
Other features: The iPod nano’s camera could take still photos at a resolution of 640 x 480 pixels. It also offered basic video editing capabilities right on the device.

 

 

 

 

 

Marcello Mercado, Mathematical Heads, AI-Book, 2024

 

Marcello Mercado

Mathematical Heads

AI-Book, 132 pages

2024