
Leximancy

Weave words into wonder.

Leximancy is a web application that explores the creative potential of AI. It takes a random set of words and uses a generative AI model to either form a coherent sentence from them or infer the user's deeper semantic intent.


✨ Core Concept

The user is presented with a set of randomly chosen words. They can then "cast leximancy" to send these words to a server-side AI flow. The AI model processes these words and returns a new, elegant sentence. An experimental mode is available to interpret the meaning behind the words rather than just using them literally.


🛠️ Tech Stack

  • Next.js (App Router) with TypeScript
  • React UI built from reusable Shadcn/ui components
  • Genkit for defining server-side AI flows
  • Google Gemini API as the generative model
  • Firebase Hosting for deployment

📂 Project Structure

Here is a simplified overview of the most important files and directories in the project.

/
├── src/
│   ├── app/
│   │   ├── page.tsx        # Main application component and UI
│   │   └── layout.tsx      # Root layout
│   ├── ai/
│   │   ├── genkit.ts       # Genkit initialization and model configuration
│   │   └── flows/
│   │       ├── generate-coherent-sentence.ts   # AI flow for standard mode
│   │       └── semantic-inference-experimental.ts # AI flow for experimental mode
│   ├── components/       # Reusable UI components (Shadcn/ui)
│   ├── hooks/            # Custom React hooks
│   └── lib/
│       ├── words.ts        # The master list of words
│       └── utils.ts        # Utility functions
├── public/               # Static assets
├── firebase.json         # Firebase hosting configuration
├── next.config.ts        # Next.js configuration
└── package.json          # Project dependencies and scripts
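
The random draw described above starts in src/lib/words.ts. As a rough sketch, such a module might look like the following; the actual word list and helper names in the repo may differ:

```typescript
// Hypothetical sketch of src/lib/words.ts: a word pool plus a helper that
// draws a random, non-repeating set of words for each casting.
// The pool contents and function names here are illustrative only.

export const WORDS: string[] = [
  "ember", "willow", "quartz", "murmur", "lantern",
  "drift", "sable", "hollow", "gossamer", "tide",
];

// Fisher-Yates shuffle on a copy of the pool, then take the first `count`.
export function pickRandomWords(count: number, pool: string[] = WORDS): string[] {
  const copy = [...pool];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, count);
}
```

Shuffling a copy (rather than the pool itself) keeps the master list intact between castings and guarantees the drawn words are unique.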

🌊 How It Works: The Flow of Data

Here is a step-by-step breakdown of the process, from a user's click to the AI-generated response.

  1. User Action: The user clicks the "Cast Leximancy" button in the browser.
  2. Frontend Logic (page.tsx):
    • A set of random words is selected from the master list in src/lib/words.ts.
    • The UI is updated to show these new words.
    • An asynchronous call is made to a server-side Genkit flow, passing the selected words as an argument.
  3. Backend AI Flow (/src/ai/flows/):
    • The Genkit flow (running on the server) receives the words.
    • It constructs a specific prompt, instructing the AI on how to handle the words.
    • This prompt is sent to the Google Gemini API.
  4. AI Processing: The Gemini model processes the prompt and generates a response: in standard mode, a new coherent sentence; in experimental mode, an interpretation of the words' deeper meaning.
  5. Data Return:
    • The Gemini API returns the sentence to the Genkit flow.
    • The Genkit flow returns the sentence to the frontend.
  6. UI Update:
    • The frontend receives the sentence and displays it to the user.
    • The session history is updated with the words and the resulting sentence.
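
Steps 2 and 3 above hinge on the prompt each flow builds before calling Gemini. A simplified, hypothetical sketch of that prompt construction for both modes (the real wording lives in the two flow files under src/ai/flows/ and will differ):

```typescript
// Illustrative prompt builders mirroring the two flows described above.
// "coherent" corresponds to generate-coherent-sentence.ts; "semantic" to
// semantic-inference-experimental.ts. All wording here is an assumption.

export type LeximancyMode = "coherent" | "semantic";

export function buildPrompt(words: string[], mode: LeximancyMode): string {
  const list = words.join(", ");
  if (mode === "coherent") {
    // Standard mode: use every word literally in a single sentence.
    return `Write one coherent, elegant sentence that uses each of these words exactly once: ${list}.`;
  }
  // Experimental mode: infer the deeper intent behind the words.
  return `Infer the deeper meaning suggested by these words and express it in one sentence, without necessarily using them literally: ${list}.`;
}
```

Keeping the prompt construction as a pure function like this makes the two modes easy to compare and test independently of the model call.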

🚀 How to Run Locally

  1. Clone the repository.
  2. Install dependencies:
    npm install
  3. Set up your environment variables: Create a .env.local file in the root directory and add your Google AI API key:
    GEMINI_API_KEY=your_api_key_here
    
  4. Run the development server:
    npm run dev
  5. Open http://localhost:3000 in your browser.
