
In 2017, when we decided to focus on building an AI company from the ground up, AI was still the territory of science fiction for most people. It wasn’t that AI was absent from the market; rather, it was an invisible layer of automation and assistance across many fields.

That presented a unique challenge, because our audience has always been other businesses. Trying to sell something that most people associate with epic post-apocalyptic movies felt less like a sales process and more like a magic trick.

At first, the question seemed to be, how do we sell people something they are programmed to be skeptical of? But the question really was, how do we reveal to people that AI has been shaping their lives for a long time so they can get emotionally comfortable with the uncertainty of technological risk?

For at least the last 40 years, AI has been a field of study with slow market exposure. That exposure has grown alongside AI’s capabilities as they evolved from very narrow to more general. If that trajectory is a cone, we are somewhere in its wide end.

Personally, the first time I realized I was using AI was sometime in the 00s, using the Adobe suite’s assisted fill and erase tools, which were much more precise than their predecessors, themselves algorithmic but rougher.

The new tools seemed to predict what was impossible to communicate through the cursor in terms of user intent to alter a graphic. I was simply pointing and telling the AI, “do your magic here,” yet the experience made me feel like I was altering the pixels myself. It was a repeated process with a fuzzy generative outcome that needed user triggering, in-the-moment configuration, and approval to know when it was done. But nonetheless, the system was predicting, from limited input signals from the mouse and keyboard, what precise and highly variable changes I wanted to the underlying graphics. It was right most of the time, and it felt like magic.

Even in today’s world of agents and whispers of AGI, I still think that experience missions should be central to how we shape AI. How can AI augment our experiences and capabilities to make them better? How can the AI user have a clear, in-the-moment understanding of the benefits they received by using the technology? How can they be engaged in the process and not just the result?

When I think back to AI’s early use in healthcare (virtual protein synthesis and other visual tasks), it strikes me that the dam-breaking moment of modern generative AI came when AI evolved to let people be editors. AI jumped the gap when it allowed people to show off a higher version of themselves. That’s a powerful emotional benefit.

When people suddenly do work that is higher level and engages different knowledge centers and skills in the brain, it creates dopamine and locks us in. In that moment, a technology designed to think like a human transformed the way humans were thinking about themselves. All it took was the AI evolving from a task repeater to a synthesizer of solutions. This is true even if the solutions are amalgamated from existing data rather than created from scratch. Generative AI allows the willing or reluctant winemaker to become the wine taster.

The History Behind Generative Systems

We know when these generative technologies started to change our world, but have you ever considered who “generated” the first images and why? 

The answer is both obvious and astonishing. The origin of the technology behind chatbots, media generators, and generative AI in general started with a pursuit to express the inexpressible.

The geometric interiors of mosques trace back to a convergence of mathematics, craft, and spiritual philosophy in the medieval Islamic world (roughly 8th–15th centuries). As Islamic societies expanded across regions like Persia, Central Asia, North Africa, and Spain, scholars translated and built upon Greek, Roman, and Indian mathematical texts, especially Euclidean geometry. This knowledge didn’t stay abstract; it flowed directly into architecture and craft traditions. At the same time, religious norms often discouraged figurative imagery in sacred spaces, which pushed artists and architects toward non-representational visual systems. Geometry became the ideal medium: universal, precise, and capable of expressing order without depicting living forms. Using simple tools like a compass and straightedge, artisans constructed circles, grids, and polygons that could be repeated and extended indefinitely, forming the foundation of what we’d now recognize as rule-based visual generation.

Over time, this approach evolved into a highly specialized discipline. Pattern designers (often called muhandis or geometric draftsmen) developed sophisticated construction methods, while master craftsmen (tile setters, stucco carvers, woodworkers) translated those designs into physical surfaces. In places like Persia and the Ottoman Empire, entire architectural programs were planned around proportional systems that dictated where and how patterns would appear. In North Africa and Al-Andalus, artisans perfected techniques like zellij tilework, assembling intricate mosaics from hand-cut pieces. A key innovation was the use of modular systems, such as star polygons and later girih tiles, which allowed complex patterns to be generated from a limited vocabulary of shapes. These systems enabled not just repetition, but controlled variation, where patterns could evolve across a surface while maintaining internal coherence.

By the 13th–15th centuries, some of these designs reached remarkable levels of complexity. Certain Persian patterns, like those seen at the Darb-i Imam shrine, exhibit properties similar to what modern mathematics calls quasi-periodic tiling: ordered but non-repeating structures that weren’t formally described until the 20th century. Whether or not medieval artisans fully understood the mathematics in modern terms, they clearly developed practical methods for constructing patterns with long-range order and local variation. Pattern books and scrolls like those preserved at Topkapi Palace suggest that these systems were documented and transmitted as generative frameworks, not just static designs.

What makes this history especially relevant today is that mosque geometry embodies the core principle behind modern generative systems: complexity emerging from simple rules. A vast, intricate wall pattern might be governed by just a few parameters: a grid type, a set of angles, a repetition rule, and a transformation logic. In that sense, these interiors function like early analog versions of generative algorithms, where the “code” is executed by human hands rather than machines. This lineage connects directly to contemporary practices in computational design, parametric architecture, and even generative AI. 
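To make the “few parameters” idea concrete, here is a minimal sketch in Python. The function names and parameter choices are my own, not drawn from any historical pattern book: a point count, a skip rule, and a grid spacing are enough to unfold a single star motif into a whole field, the way a zellij layout repeats one construction across a wall.

```python
import math

def star_polygon(n, step, radius=1.0, center=(0.0, 0.0)):
    """Vertices of an {n/step} star polygon: visit every `step`-th of
    n equally spaced points on a circle (a compass-and-straightedge
    construction expressed as arithmetic)."""
    cx, cy = center
    points = []
    for k in range(n):
        angle = 2 * math.pi * ((k * step) % n) / n
        points.append((cx + radius * math.cos(angle),
                       cy + radius * math.sin(angle)))
    return points

def tile_pattern(rows, cols, n=8, step=3, spacing=2.0):
    """Repeat the same motif on a square grid: a handful of parameters
    (grid size, point count, skip rule, spacing) generate the field."""
    motifs = []
    for r in range(rows):
        for c in range(cols):
            motifs.append(star_polygon(n, step,
                                       center=(c * spacing, r * spacing)))
    return motifs

pattern = tile_pattern(rows=3, cols=4)
print(len(pattern))     # 12 motifs on the grid
print(len(pattern[0]))  # 8 vertices per star
```

Changing `n`, `step`, or the grid logic yields an entirely different wall from the same tiny program, which is the point: the complexity lives in the unfolding, not in the rules.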

Just as a modern model compresses visual knowledge into a set of weights and regenerates images from that latent structure, Islamic geometric art compresses visual complexity into a compact system of geometric rules, then unfolds it across space in ways that feel both infinite and intentional.

 
