- LeRobot integration into LuckyEngine (LE, LR, LL)
- LeRobot is already set up inside of LuckyLab and was able to train a policy
- Current focus is on inference inside of LuckyEngine
- Currently, inference is set up and gRPC connects and sends observations, but the robot doesn't move inside of LuckyEngine
- Scene setup is not working correctly in this scenario: in the piper-blockstacing scene, blocks get teleported out of the workspace without the robot even moving
- Setup LuckyLab with CLI to start LuckyEngine in specified scene and autonomously start training/inference (LE, LR, LL)
- Enable command-line control from LuckyLab to start LuckyEngine in a specified scene and play the scene automatically, giving full control from the command line via LuckyLab
- Configure mjwarp in LuckyEngine (LE, LR, LL)
- Set up mjwarp inside of LuckyEngine
- Enable multiple robots to be controlled inside of LuckyEngine through Learn/ API
- Enable luckylab to control multiple robots simultaneously
- Expand communication layer in LuckyRobots to grow the observation structure dynamically and potentially compress the transmitted messages (i.e., visual observations)
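One way to sketch the "compress the transmitted messages" idea: wrap each observation field with an encoding tag and compress only large payloads such as camera frames. The function names and the message `dict` shape here are hypothetical, not existing LuckyRobots API; a real implementation would likely use an image codec (JPEG/PNG) rather than plain zlib.

```python
import zlib


def pack_observation(name: str, payload: bytes, threshold: int = 1024) -> dict:
    """Wrap one observation field, compressing large payloads (e.g. images).

    Small fields (joint positions, etc.) are sent raw since compression
    overhead isn't worth it for a few dozen bytes.
    """
    if len(payload) >= threshold:
        return {"name": name, "encoding": "zlib", "data": zlib.compress(payload, 6)}
    return {"name": name, "encoding": "raw", "data": payload}


def unpack_observation(msg: dict) -> bytes:
    """Invert pack_observation on the receiving side."""
    if msg["encoding"] == "zlib":
        return zlib.decompress(msg["data"])
    return msg["data"]
```

Because each field carries its own `encoding` tag, the observation structure can grow dynamically without the receiver needing a fixed schema.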
- Control of LuckyEngine through LuckyLab, as well as setting up a task inside of LuckyLab to be created in LuckyEngine (LE, LR, LL)
- Let a task be defined inside of LuckyLab that creates a contract configuring the task (e.g., randomizations, observation, data required for reward calculation)
- Receive the task in LuckyEngine and validate that everything exists inside of LuckyEngine (e.g., the observation function exists, the reward calculation exists) so that users don't NEED to reactively define anything
- If necessary, enable users to define additional components inside of LuckyEngine/RobotSandbox seamlessly, in a sort of IsaacLab MDP structure
- Figure out the best way to convey to users from LuckyLab what already exists inside of LuckyEngine, so they know what they can incorporate into their task directly from LuckyLab without having to go into LuckyEngine to check for themselves
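The contract-and-validate flow above could look like the following minimal sketch. All names here (the registries, `TaskContract`, `validate_contract`) are hypothetical placeholders, assuming LuckyEngine keeps a registry of its built-in observation and reward functions that LuckyLab can check against before launching.

```python
from dataclasses import dataclass, field

# Hypothetical registries of components LuckyEngine already provides.
KNOWN_OBSERVATIONS = {"joint_pos", "joint_vel", "ee_pose", "rgb_cam0"}
KNOWN_REWARDS = {"reach_target", "block_stacked"}


@dataclass
class TaskContract:
    """Task definition created in LuckyLab and sent to LuckyEngine."""
    name: str
    observations: list = field(default_factory=list)
    rewards: list = field(default_factory=list)


def validate_contract(contract: TaskContract) -> list:
    """Return the list of missing components; empty means the engine can run it.

    A non-empty result is what would prompt the user to define the missing
    pieces inside LuckyEngine/RobotSandbox.
    """
    missing = [f"observation:{o}" for o in contract.observations
               if o not in KNOWN_OBSERVATIONS]
    missing += [f"reward:{r}" for r in contract.rewards
                if r not in KNOWN_REWARDS]
    return missing
```

Exposing the same registries back to LuckyLab (e.g., as a capabilities query) is one way to let users see what exists without opening LuckyEngine.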
- Scene Domain Randomization (LE, LR, LL)
- Lighting: Color, location(s), angle, temperature
- Textures: materials/colors, background
- Camera sensor randomization: noise, aspect ratio/resolution, field of view, focal length, distortion
- Object randomization (e.g., object placement, object texture, object color)
- Background randomization
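A per-episode randomization draw over the categories above might look like this sketch. The parameter names and ranges are illustrative placeholders, not values from any LuckyEngine config.

```python
import random


def sample_scene_randomization(rng: random.Random) -> dict:
    """Draw one set of scene parameters for an episode.

    Ranges are illustrative; a real config would expose these per scene.
    """
    return {
        # Lighting: temperature and angle (color/location omitted for brevity)
        "light_temperature_K": rng.uniform(2700.0, 6500.0),
        "light_angle_deg": rng.uniform(-45.0, 45.0),
        # Camera sensor: noise, field of view
        "camera_noise_std": rng.uniform(0.0, 0.02),
        "camera_fov_deg": rng.uniform(60.0, 100.0),
        # Object placement jitter in the workspace plane
        "object_xy_offset_m": (rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1)),
    }
```

Passing in an explicit `random.Random` keeps episodes reproducible from a seed, which matters when debugging issues like the block-teleporting scene bug.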
- Convert gRPC to IPC (LE, LR, LL)
- Design is defined in grpc_to_ipc.md
- Leave gRPC in as an option for people who still want to use it; there may be other uses for it as well
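Without having grpc_to_ipc.md at hand, here is a generic sketch of the framing an IPC transport typically needs once gRPC's message boundaries are gone: length-prefixed messages over a local socket. The function names are hypothetical.

```python
import socket
import struct


def send_msg(sock: socket.socket, payload: bytes) -> None:
    """Write a 4-byte big-endian length prefix, then the payload."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)


def recv_msg(sock: socket.socket) -> bytes:
    """Read one length-prefixed message; blocks until it is complete."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """recv() may return partial data, so loop until n bytes arrive."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf
```

The same framing works over a Unix domain socket or `socketpair`, which avoids the TCP stack entirely for same-host engine/lab communication.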
- Setup motion tracking task in luckylab (LE, LL)
- Set up a structure to handle animations as a command, similar to vel_command inside of LuckyEngine
- Setup task structure inside of luckylab for motion tracking with additional rewards, observations, terminations
- Look to mjlab for inspiration to see how they handle the "tracking" task
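Tracking tasks of this kind (mjlab, DeepMimic-style) typically score each step with an exponentiated negative tracking error, which keeps the reward bounded in (0, 1]. A minimal sketch, with `sigma` as an assumed tuning parameter:

```python
import math


def tracking_reward(ref_pose: list, robot_pose: list, sigma: float = 0.25) -> float:
    """Reward in (0, 1]: 1.0 for perfect tracking, decaying with squared error.

    ref_pose comes from the animation command; robot_pose from proprioception.
    sigma controls how quickly the reward falls off with tracking error.
    """
    sq_err = sum((r - p) ** 2 for r, p in zip(ref_pose, robot_pose))
    return math.exp(-sq_err / sigma ** 2)
```

Separate instances of this (body position, joint angles, end-effector pose) can be summed with weights as the additional rewards mentioned above.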
- Setup loco-manipulation task in luckylab (LL)
- Setup task structure inside of luckylab with additional rewards, observations, terminations
- This has already been tried inside of mjlab (~1.5 months ago) but was experiencing issues with contact physics exploding and saturating VRAM, then crashing the GPU
- The issue is due to the number of contacts; the relevant MuJoCo/MJX simulation parameters appear to be `nconmax` and `njmax` (caps on contact count and constraint rows)
- Add state-estimation to luckyrobots (LR)
- Add state estimation using an Extended Kalman Filter (EKF) to derive base_lin_vel on deployed robots for their observation structure, since accelerometers are too noisy to integrate directly
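As a stripped-down illustration of the idea, here is a 1-D linear Kalman filter that integrates noisy acceleration in the predict step and corrects with a kinematic velocity measurement (e.g., from leg odometry). The deployed version would be a full EKF in 3-D with orientation; the noise values here are placeholder assumptions.

```python
class VelocityKF:
    """Minimal 1-D Kalman filter for base linear velocity.

    predict(): integrate the (noisy) accelerometer reading.
    update():  correct with an independent velocity measurement, so
               accelerometer noise and bias don't accumulate unbounded.
    """

    def __init__(self, q: float = 0.01, r: float = 0.05):
        self.v = 0.0  # velocity estimate
        self.p = 1.0  # estimate variance
        self.q = q    # process noise added per predict step
        self.r = r    # measurement noise of the velocity observation

    def predict(self, accel: float, dt: float) -> None:
        self.v += accel * dt
        self.p += self.q

    def update(self, v_meas: float) -> None:
        k = self.p / (self.p + self.r)   # Kalman gain
        self.v += k * (v_meas - self.v)
        self.p *= (1.0 - k)
```

The filtered `v` is what would feed the base_lin_vel slot of the observation structure instead of a raw integrated accelerometer signal.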
- Add safety-watchdog to luckyrobots (LR)
- Add a software e-stop that detects when the proprioceptive state reaches an "unsafe" position, then terminates the policy and brings the robot to a safe configuration
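A sketch of that watchdog logic, assuming joint limits with a safety margin as the "unsafe" criterion (class and function names are hypothetical). The key design choice is that the trip latches: once unsafe, the policy stays terminated until an explicit reset.

```python
def check_safety(joint_pos: list, limits: list, margin: float = 0.05) -> bool:
    """True iff every joint is strictly inside (lo + margin, hi - margin)."""
    return all(lo + margin < q < hi - margin
               for q, (lo, hi) in zip(joint_pos, limits))


class Watchdog:
    """Software e-stop: overrides policy actions once an unsafe state is seen."""

    def __init__(self, limits: list, safe_pose: list):
        self.limits = limits
        self.safe_pose = safe_pose
        self.tripped = False

    def step(self, joint_pos: list, policy_action: list) -> list:
        if self.tripped or not check_safety(joint_pos, self.limits):
            self.tripped = True       # latch: policy stays terminated
            return self.safe_pose     # drive toward a safe configuration
        return policy_action
```

In practice the safe-pose command should itself be rate-limited so the recovery motion doesn't cause a second violation.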
- Add ring-buffers for observation history to luckylab (LL)
- Add an observation buffer to enable observation history as part of the policy structure (can be done entirely inside of luckylab by appending to a buffer every time we receive a new observation)
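In Python this is a thin wrapper around `collections.deque` with `maxlen`, which drops the oldest entry automatically. The class name and padding strategy (repeat the oldest observation until the buffer fills) are assumptions for illustration.

```python
from collections import deque


class ObservationHistory:
    """Fixed-length ring buffer of observations for history-conditioned policies."""

    def __init__(self, horizon: int):
        self.buf = deque(maxlen=horizon)  # deque evicts the oldest item itself

    def append(self, obs) -> None:
        self.buf.append(obs)

    def stacked(self) -> list:
        """Observations oldest-first; pad by repeating the oldest until full,
        so the policy always sees a fixed-size history."""
        if not self.buf:
            return []
        pad = [self.buf[0]] * (self.buf.maxlen - len(self.buf))
        return pad + list(self.buf)
```

`stacked()` would be flattened/concatenated into the policy input each step.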
- Add low-pass filtering to luckyrobots (LR)
- Apply low-pass filtering (e.g., Butterworth) to actuator controls so running policies in the real world is safer and the motors aren't damaged by overly fast oscillations
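A proper Butterworth design would come from `scipy.signal.butter`; as a dependency-free stand-in, here is a first-order low-pass with its coefficient derived from the cutoff frequency (the cutoff/timestep values are placeholders to tune per robot):

```python
import math


class LowPassFilter:
    """First-order low-pass on actuator targets (simpler stand-in for a
    Butterworth filter): attenuates fast oscillations before they reach
    the motors."""

    def __init__(self, cutoff_hz: float, dt: float):
        # Standard first-order RC discretization: alpha = dt / (RC + dt).
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        self.alpha = dt / (rc + dt)
        self.y = None  # filter state; initialized on first sample

    def __call__(self, x: float) -> float:
        if self.y is None:
            self.y = x                       # avoid a startup transient
        else:
            self.y += self.alpha * (x - self.y)
        return self.y
```

One filter instance per actuator, applied to the policy's output just before it is sent to the motor controller.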
- Add teleoperation toolkit to luckyrobots (LE, LR)
- Enable control of robots inside of luckyengine through teleoperation to collect data in an easier manner
- Most likely need to use something like UDP here for low latency, since dropping one or two frames at any given time is irrelevant
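A minimal sketch of that UDP path using only the standard library, with a plain-text wire format for readability (a real toolkit would likely use a packed binary format and add sequence numbers to discard out-of-order packets). All function names are hypothetical.

```python
import socket


def make_udp_pair(port: int = 0):
    """Bind a receiver on localhost; return (sender, receiver, receiver_addr).
    port=0 lets the OS pick a free port."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", port))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return tx, rx, rx.getsockname()


def send_teleop(sock: socket.socket, addr, joint_targets: list) -> None:
    # Fire-and-forget: a dropped datagram just means we act on the next one.
    sock.sendto(",".join(f"{q:.4f}" for q in joint_targets).encode(), addr)


def recv_teleop(sock: socket.socket) -> list:
    data, _ = sock.recvfrom(1024)
    return [float(x) for x in data.decode().split(",")]
```

Because each datagram carries a complete joint-target snapshot, losing a packet never corrupts state; the receiver simply holds the last command.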
- Add IsaacAutomator to LuckyEngine using RunPod (LE, LR, LL)
- Similar to how Isaac Automator enables people to seamlessly get Isaac Sim up and running to train a robot on GCP or AWS with just a URDF file
- LuckyEngine, LuckyRobots, and LuckyLab should enable a user to drop a URDF/XML file into LuckyEngine and have all of the infrastructure handled for them to start collecting data, training, and/or running inference on a cloud node
Last active: March 13, 2026 17:24
LuckyRobots ToDo List