About Futures Lens


Futures Lens is a speculative design tool that fosters reflection on possible futures. Capture familiar objects, prompt new contexts, and grow a catalog of transformed artifacts that reveal how AI can expand the imaginative range of design practice.

Workshop Methodology

Explore Diverse Futures

Futures are shaped by different perspectives, each offering unique possibilities. Use AI as a design method to prototype alternative realities and understand how context transforms everyday objects.

Speculative Design Strategies

Apply critical design techniques such as constraints, exaggeration, and absurdity. Turn these into AI prompts to create provocative artifacts that challenge assumptions.

Iterative Co-Creation

Engage in playful collaboration with AI across multiple iterations. Decontextualize familiar objects and redefine their purpose, building a catalog of fantastic things.

Design Process

Decontextualize

  1. Select an everyday object
  2. Consider its current context and function
  3. Imagine removing it from familiar surroundings

Apply Speculative Strategies

  1. Choose constraints, exaggeration, or absurdity
  2. Craft prompts that challenge assumptions
  3. Use AI to generate speculative interpretations
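The strategy-to-prompt step above can be sketched in code. This is an illustrative helper only: the `build_prompt` function and the strategy phrasings are assumptions for this sketch, not the app's actual prompt templates.

```python
# Illustrative only: turn an everyday object plus a speculative
# strategy into a text prompt. Phrasings are example templates.
STRATEGIES = {
    "constraints": "redesigned for a world without {constraint}",
    "exaggeration": "with its main feature exaggerated a hundredfold",
    "absurdity": "repurposed for a completely absurd task",
}

def build_prompt(obj: str, strategy: str, constraint: str = "electricity") -> str:
    """Combine an object with a speculative strategy into one prompt."""
    phrase = STRATEGIES[strategy].format(constraint=constraint)
    return f"a photograph of a {obj}, {phrase}"

print(build_prompt("toaster", "exaggeration"))
# a photograph of a toaster, with its main feature exaggerated a hundredfold
```

Keeping strategies as named templates makes it easy to add new critical-design moves without touching the generation code.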

Iterate & Catalog

  1. Create multiple variations across iterations
  2. Build narratives around transformed objects
  3. Curate your catalog of fantastic things

Technology Stack

SvelteKit

Reactive interface and PWA shell

IndexedDB

Client-side history storage

FastAPI

Orchestrates ComfyUI pipelines

ComfyUI

Diffusion workflow backend

AI Pipeline

The AI pipeline uses ComfyUI to orchestrate an image-to-image transformation workflow powered by Stable Diffusion XL:

Input Processing

Your uploaded or captured image is resized to 1024×1024 pixels and encoded into latent space by a VAE (Variational Autoencoder) for efficient processing.
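The resize arithmetic can be sketched as follows. Whether the app crops or stretches to reach the square target is an assumption here; this sketch shows a common cover-then-crop approach, scaling the shorter side to 1024 (SDXL's native resolution) before center-cropping.

```python
TARGET = 1024  # SDXL's native resolution

def cover_size(width: int, height: int, target: int = TARGET) -> tuple[int, int]:
    """Scale so the shorter side reaches `target` while preserving
    aspect ratio; the longer side would then be center-cropped to
    `target`. (Crop-to-fit is an assumption for this sketch.)"""
    scale = target / min(width, height)
    return round(width * scale), round(height * scale)

# A 4:3 landscape photo scales to 1365×1024, then crops to 1024×1024.
print(cover_size(1600, 1200))  # (1365, 1024)
```

After resizing, the VAE compresses the image 8× per side, so the 1024×1024 input becomes a 128×128 latent.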

IP-Adapter Conditioning

The input image is processed through IP-Adapter, which extracts visual features and conditions the generation to maintain structural similarity while allowing creative transformation.

Model Enhancement

The base JuggernautXL model is enhanced with a detail-focused LoRA (Low-Rank Adaptation) that improves texture and fine detail generation.

Text Guidance

Your prompt is encoded and combined with the visual conditioning. The familiarity slider controls the denoising strength, trading off faithful reproduction against creative interpretation.
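The slider-to-denoise mapping might look like the sketch below. The inverse relationship (more familiarity, less denoising) follows from the description above, but the exact bounds and linear mapping are assumptions for illustration.

```python
def familiarity_to_denoise(familiarity: float) -> float:
    """Map a 0..1 familiarity slider to KSampler denoising strength.
    High familiarity -> low denoise (stay close to the input image);
    low familiarity -> high denoise (more creative freedom).
    The 0.3..0.9 range and linear mapping are assumptions."""
    if not 0.0 <= familiarity <= 1.0:
        raise ValueError("familiarity must be in [0, 1]")
    lo, hi = 0.3, 0.9  # assumed denoise bounds
    return round(hi - familiarity * (hi - lo), 2)

print(familiarity_to_denoise(1.0))  # 0.3 — nearly faithful reproduction
print(familiarity_to_denoise(0.0))  # 0.9 — strong transformation
```

Clamping the low end above 0 keeps every result at least slightly transformed; capping below 1.0 preserves some of the input's structure even at maximum strangeness.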

Diffusion Sampling

The KSampler uses DPM++ 2M with Karras scheduling for 20 steps at CFG 6.5, generating the final transformed image through iterative denoising.

Output Decoding

The latent result is decoded back to pixel space and returned as your transformed image.
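The steps above can be sketched as a single ComfyUI API-format graph. The node class names below are ComfyUI built-ins; the checkpoint and LoRA file names are placeholders, and the IP-Adapter stage is omitted because it comes from a custom-node pack whose interface varies by version.

```python
# Sketch of the pipeline as a ComfyUI API-format workflow graph.
# Each node references upstream outputs as [node_id, output_index].
def build_workflow(prompt: str, denoise: float, seed: int = 0) -> dict:
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "juggernautXL.safetensors"}},
        "2": {"class_type": "LoraLoader",  # detail-enhancing LoRA
              "inputs": {"model": ["1", 0], "clip": ["1", 1],
                         "lora_name": "detail.safetensors",
                         "strength_model": 1.0, "strength_clip": 1.0}},
        "3": {"class_type": "CLIPTextEncode",  # text guidance
              "inputs": {"clip": ["2", 1], "text": prompt}},
        "4": {"class_type": "CLIPTextEncode",  # negative prompt
              "inputs": {"clip": ["2", 1], "text": ""}},
        "5": {"class_type": "LoadImage",
              "inputs": {"image": "input.png"}},
        "6": {"class_type": "VAEEncode",  # pixels -> latent space
              "inputs": {"pixels": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "KSampler",
              "inputs": {"model": ["2", 0],
                         "positive": ["3", 0], "negative": ["4", 0],
                         "latent_image": ["6", 0],
                         "seed": seed, "steps": 20, "cfg": 6.5,
                         "sampler_name": "dpmpp_2m",
                         "scheduler": "karras", "denoise": denoise}},
        "8": {"class_type": "VAEDecode",  # latent -> pixels
              "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    }
```

The KSampler node carries the settings named above (DPM++ 2M, Karras scheduling, 20 steps, CFG 6.5), while `denoise` is where the familiarity slider plugs in.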

Team

Jordi Tost

Concept

Christopher Pietsch

AI Pipeline & Engineering

View the broader research program at AI+Design Lab ↗