MakePad genui: the dev defines a widget library (the framework provides standard widgets), and the AI generates the UI on the fly based on context.
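A minimal sketch of that split, using hypothetical types (WidgetRegistry, UiNode, and validate are illustrative names, not Makepad APIs): the developer registers the framework's standard widgets, and an AI-generated layout is only accepted if every node refers to a registered widget.

```rust
use std::collections::HashSet;

/// Hypothetical AI-produced layout node: a widget name plus nested children.
struct UiNode {
    widget: String,
    children: Vec<UiNode>,
}

/// The developer-defined widget library: the set of widget names the AI may use.
struct WidgetRegistry {
    known: HashSet<String>,
}

impl WidgetRegistry {
    fn new(widgets: &[&str]) -> Self {
        Self { known: widgets.iter().map(|w| w.to_string()).collect() }
    }

    /// Accept an AI-generated layout only if every node maps to a registered widget.
    fn validate(&self, node: &UiNode) -> bool {
        self.known.contains(&node.widget) && node.children.iter().all(|c| self.validate(c))
    }
}

fn main() {
    // Developer defines the widget library (the framework's standard widgets).
    let registry = WidgetRegistry::new(&["View", "Label", "Button", "TextInput"]);

    // The AI generates a UI on the fly based on context (hard-coded here for the sketch).
    let generated = UiNode {
        widget: "View".into(),
        children: vec![
            UiNode { widget: "Label".into(), children: vec![] },
            UiNode { widget: "Button".into(), children: vec![] },
        ],
    };

    assert!(registry.validate(&generated));
}
```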
Here is a summary of the conversation log regarding the Moly project:
- Name Change: The team decided to rename the app from "Moly" to "Moly AI" because "Moly" and "MolyApp" were already taken in the App Store.
- Domain & Bundle ID: The domain
moly.aiis taken by a different AI product. The team settled onorg.molyai.appfor the Apple Bundle ID. - Website: A landing page was published at
moly-ai.ai, though some performance issues (lag on Firefox, GPU spikes) were noted.
- TestFlight: Julian successfully set up TestFlight for iOS beta testing. There were initial hurdles with "External Tester" access due to a missing ITSAppUsesNonExemptEncryption key in the plist.
- Known Issues:
Separate chat for interaction with an LLM using tools in the context of the Robrix app. Note that while this can appear as a chat history to the user, it should not be constructed as one internally, in order to avoid prompt-injection pitfalls when the LLM uses tools.
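A minimal sketch of that separation, with hypothetical types (nothing below is Robrix or Moly code): the transcript shown to the user grows like a chat, but the context sent to the model for a tool-use turn is rebuilt each time from the system prompt, the tool schema, and only the latest user request, so earlier tool output is never replayed as trusted conversation history.

```rust
/// What the user sees in the UI: a plain transcript entry (hypothetical type).
struct TranscriptEntry {
    speaker: String, // "user", "assistant", or "tool"
    text: String,
}

/// One message actually sent to the LLM (hypothetical type).
struct PromptMessage {
    role: String,
    content: String,
}

/// Build the model context for a tool-use turn from scratch: system prompt,
/// tool schema, and the latest user request only. Prior tool output from the
/// visible transcript is deliberately not included, so instructions injected
/// into it never reach the model as conversation history.
fn build_tool_context(system_prompt: &str, tool_schema: &str, latest_user_request: &str) -> Vec<PromptMessage> {
    vec![
        PromptMessage {
            role: "system".into(),
            content: format!("{system_prompt}\n\nAvailable tools:\n{tool_schema}"),
        },
        PromptMessage {
            role: "user".into(),
            content: latest_user_request.to_string(),
        },
    ]
}

fn main() {
    // The transcript shown to the user can still look like a chat...
    let _shown = vec![TranscriptEntry { speaker: "user".into(), text: "close the app".into() }];
    // ...while the context sent to the model is reconstructed per turn.
    let ctx = build_tool_context("You control the Robrix app.", r#"{"name": "close_app"}"#, "close the app");
    assert_eq!(ctx.len(), 2);
}
```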
Features:
- Connect with an LLM (local or remote).
Idea to integrate AI functionality into Robrix.
Iterate along the following lines:
- Integrate with a local/remote AI endpoint (Moly, Ollama, other?)
- Goal: get a basic answer to a chat (see the sketch after this list).
- Private chat with the AI (separate from the Matrix chat functionality).
- Add basic tooling to the AI integration.
- Goal: one basic AI action, like "close app", which results in closing the app.
- Actions are taken based on conversational interaction with the AI in the private chat.
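A rough sketch of the first two goals, assuming Ollama's local HTTP API on its default port (11434) and the reqwest (blocking + json features) and serde_json crates; this is not Robrix's actual client code, just one way to get a basic answer back and act on a hypothetical close_app reply.

```rust
use serde_json::json;

/// Ask a locally running Ollama instance for a single, non-streamed answer.
fn ask_ollama(prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    // Ollama's default local endpoint; a remote endpoint (Moly, etc.) would swap the URL.
    let resp: serde_json::Value = client
        .post("http://localhost:11434/api/generate")
        .json(&json!({
            "model": "gemma3:4b",
            "prompt": prompt,
            "stream": false,
        }))
        .send()?
        .json()?;
    Ok(resp["response"].as_str().unwrap_or_default().to_string())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Goal: get a basic answer to a chat.
    let answer = ask_ollama("Reply with the single word close_app if the user asked to close the app: 'please close the app'")?;
    println!("{answer}");

    // Goal (sketch): one basic AI action. If the reply names the hypothetical
    // close_app action, the app acts on it by shutting down.
    if answer.contains("close_app") {
        std::process::exit(0);
    }
    Ok(())
}
```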
This file contains most of the chat history.
I want you to clean it up so that the most interesting parts remain (in terms of showing the evolution of this project):
- Remove all of your comments where you only stated what task you performed.
- Remove all terminal commands and other actions you performed.
- Keep only your answers related to TLA+
- Break it up into sections, where each title summarizes what happened.
- Debug sessions where I keep telling you that something is wrong with the UI and attach screenshots can be summarized
- Don't summarize everything; for the interesting stuff, quote it as such, including your replies (on the TLA+, for example).
- Install Ollama.
- In Ollama, download the 4B and 1B gemma3 models at https://ollama.com/library/gemma3.
git clone https://github.com/gterzian/servo
- From the directory into which you did the above git clone: git checkout servo_conversational
- Build Servo (see the macOS instructions).
- Run Servo: ./mach run
Note: macOS only.
<!DOCTYPE html><html lang=\"en\"><head>\n <meta charset=\"utf-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n \n <title>Servo aims to empower developers with a lightweight, high-performance alternative for embedding web technologies in applications.</title>\n <meta property=\"og:title\" content=\"Servo aims to empower developers with a lightweight, high-performance alternative for embedding web technologies in applications.\">\n \n\n \n <meta name=\"description\" content=\"Servo is a web rendering engine written in Rust, with WebGL and WebGPU support, and adaptable to desktop, mobile, and embedded applications.\">\n <meta property=\"og:description\" content=\"Servo is a web rendering engine written in Rust, with WebGL and WebGPU support, and adaptable to desktop, mobile, and embedded applications.\">\n \n <meta name=\"keywords\" content=\"servo, servo engine, servo rendering engine, web engine, web rendering engine, web browser, web browser engine, rust\">\n
Given a user input, try to predict a browser action.
Available browser actions are:
- Navigate
- To invoke it, return a JSON object in the following format: { action: String, value: [String] } (see the parsing sketch after this list)
- Here is one example:
-
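A hedged sketch of consuming such a reply on the embedder side (illustrative only, not code from the servo_conversational branch), assuming the model returns standard JSON with quoted keys and that serde (with the derive feature) and serde_json are available:

```rust
use serde::Deserialize;

/// Mirrors the { action, value } shape described in the prompt above.
#[derive(Debug, Deserialize)]
struct BrowserAction {
    action: String,
    value: Vec<String>,
}

fn main() -> Result<(), serde_json::Error> {
    // A hypothetical model reply; the original example from the prompt is not reproduced here.
    let reply = r#"{ "action": "Navigate", "value": ["https://servo.org"] }"#;
    let parsed: BrowserAction = serde_json::from_str(reply)?;
    match parsed.action.as_str() {
        "Navigate" => println!("would navigate to {:?}", parsed.value),
        other => println!("unrecognized action: {other}"),
    }
    Ok(())
}
```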