sudo docker build -t ollama-battertarware-gui:v1 .
sudo docker build --no-cache -t ollama-battertarware-gui:v1 .
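To confirm the image built:
sudo docker images | grep ollama-battertarware-gui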
sudo docker run --gpus all -it -v /home/cpslab/Work/Thesis/rjm/polyphony/Battery-TA-RWARE:/docker-mount/Battery-TA-RWARE ollama-battertarware:v1
If stopped, sudo docker ps -a should show the container, and you can restart it with:
sudo docker start -ai infallible_maxwell
Another terminal:
sudo docker exec -it infallible_maxwell bash
python3 -m pip install --break-system-packages -e .
python3 scripts/run_heuristic.py
-- running the PPO
python3 scripts/train_ppo.py
pip uninstall pyglet -y
pip install --break-system-packages pyglet==1.5.27
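To confirm the pin took:
python3 -c "import pyglet; print(pyglet.version)"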
-----------------
Run this in one tab:
ollama serve
Pull the models:
ollama pull llama3.2:3b
-- for RAG
ollama pull nomic-embed-text
And in another tab, to run the LLM:
ollama run llama3.2:3b
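To confirm the server is up and the models are pulled:
ollama list
curl http://localhost:11434/api/tags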
python3 scripts/run_llm.py --env_id tarware-tiny-1agvs-1pickers-partialobs-chg-v1
python3 scripts/run_llm.py --env_id tarware-tiny-1agvs-1pickers-partialobs-chg-v1 --num_episodes 1 --seed 0 --model llama3.2:3b
Rendering
apt update
apt install -y libgl1 libglu1-mesa libglut3.12
xhost +local:docker
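To revoke that access again later:
xhost -local:docker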
sudo docker run --name tarware-gui --gpus all -it \
-e DISPLAY=:1 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /home/cpslab/Work/Thesis/rjm/polyphony/Battery-TA-RWARE:/docker-mount/Battery-TA-RWARE \
ollama-battertarware:v1
Later, to start it again:
sudo docker exec -it tarware-gui bash
python3 scripts/run_llm.py --env_id tarware-tiny-1agvs-1pickers-partialobs-chg-v1 --num_episodes 1 --seed 0 --model llama3.2:3b --render --log_text_chars 0 --max_steps_per_episode 500 > log5.txt
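To watch the log while it runs, in another terminal:
tail -f log5.txt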
Install xauth on the host ($XSOCK and $XAUTH are set up further down):
sudo docker run --name tarware --gpus all -it -e DISPLAY=$DISPLAY \
-e XAUTHORITY=$XAUTH \
-v $XSOCK:$XSOCK:rw \
-v $XAUTH:$XAUTH:rw \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /home/cpslab/Work/Thesis/rjm/polyphony/Battery-TA-RWARE:/docker-mount/Battery-TA-RWARE \
ollama-battertarware:v1
Basic display test:
rjmro@RS-ThinkPad MINGW64 ~/Desktop
$ echo $DISPLAY
needs-to-be-defined
rjmro@RS-ThinkPad MINGW64 ~/Desktop
$ export DISPLAY=:0.0
cpslab@cpslab-Tower-G9:~$ echo $DISPLAY
localhost:11.0
cpslab@cpslab-Tower-G9:~$ xeyes
The eyes show up in VcXsrv, and it reports one client connected when I try to close it.
The Docker container needs network capability (--network=host) for this to work.
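A quick OpenGL check inside the container (assuming mesa-utils is available via apt):
apt install -y mesa-utils
glxinfo | grep "OpenGL renderer"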
export XSOCK=/tmp/.X11-unix
export XAUTH=~/.docker.xauth
touch $XAUTH
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
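To check that the cookie merged:
xauth -f $XAUTH list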
sudo docker run --network=host --name tarware-remote-gui --gpus all -it \
-e DISPLAY=$DISPLAY \
-e XAUTHORITY=$XAUTH \
-v $XSOCK:$XSOCK:rw \
-v $XAUTH:$XAUTH:rw \
-v /home/cpslab/Work/Thesis/rjm/polyphony/Battery-TA-RWARE:/docker-mount/Battery-TA-RWARE ollama-battertarware-gui:v1
Inside the container:
apt-get update
apt install -y x11-apps xauth
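Then a quick test from inside the container:
xeyes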
python3 -m pip install --break-system-packages -e .
python3 -m pip install --break-system-packages -e ".[llm]"
The [llm] extra is needed for the LangChain backend.
sudo docker start -ai tarware-remote-gui
sudo docker exec -it tarware-remote-gui bash
ollama pull llama3.2:3b
ollama run llama3.2:3b
python3 scripts/run_llm.py --env_id tarware-tiny-1agvs-1pickers-partialobs-chg-v1 --num_episodes 1 --seed 0 --model llama3.2:3b --render --log_text_chars 0 --max_steps_per_episode 500
Inside the Docker container, DISPLAY now shows as:
echo $DISPLAY
localhost:10.0
In Git Bash, after SSH:
echo $DISPLAY
localhost:10.0
rjmro@RS-ThinkPad MINGW64 ~/Desktop
$ echo $DISPLAY
localhost:0.0
sudo -E docker run --network=host \
--name tarware-remote-gui \
--gpus all \
-it \
-e DISPLAY=$DISPLAY \
-v /home/cpslab/.Xauthority:/root/.Xauthority:ro \
-v /home/cpslab/Work/Thesis/rjm/polyphony/Battery-TA-RWARE:/docker-mount/Battery-TA-RWARE \
ollama-battertarware-gui-t1:v1
sudo -E docker run --network=host \
--name tarware-remote-gui-t1 \
--gpus all \
-it \
-e DISPLAY=$DISPLAY \
-v /home/user-rjm/.Xauthority:/root/.Xauthority:ro \
-v /home/user-rjm/Work/Thesis/rjm/polyphony/Battery-TA-RWARE:/docker-mount/Battery-TA-RWARE \
ollama-battertarware-gui:v1
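A generic form of the same command, assuming the repo sits at the same relative path under your home directory (the shell expands $HOME before sudo runs):
sudo -E docker run --network=host \
--name tarware-remote-gui \
--gpus all \
-it \
-e DISPLAY=$DISPLAY \
-v $HOME/.Xauthority:/root/.Xauthority:ro \
-v $HOME/Work/Thesis/rjm/polyphony/Battery-TA-RWARE:/docker-mount/Battery-TA-RWARE \
ollama-battertarware-gui:v1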
Fixed prompt
python3 scripts/run_llm.py --env_id tarware-tiny-1agvs-1pickers-partialobs-chg-v1 --num_episodes 1 --seed 0 --model llama3.2:3b --render --log_text_chars 0 --max_steps_per_episode 500 --experiment_mode fixed_prompt_action
Agent type prompt
python3 scripts/run_llm.py --env_id tarware-tiny-1agvs-1pickers-partialobs-chg-v1 --num_episodes 1 --seed 0 --model llama3.2:3b --render --log_text_chars 0 --max_steps_per_episode 500 --experiment_mode agent_type_prompt --config_path scripts/experiment_config.example.json
python3 scripts/run_llm.py \
--env_id tarware-tiny-1agvs-1pickers-partialobs-chg-v1 \
--num_episodes 1 --seed 0 --model llama3.2:3b \
--render --log_text_chars 0 --max_steps_per_episode 500 \
--experiment_mode message_or_action \
--config_path scripts/experiment_config.example.json
python3 scripts/run_llm.py --env_id tarware-tiny-1agvs-1pickers-partialobs-chg-v1 --num_episodes 2 --seed 0 --model llama3.2:3b --llm_backend langchain --experiment_mode message_or_action --config_path scripts/experiment_config.example.json --max_steps_per_episode 250 --enable_episode_reflection --reflection_notes_dir knowledge/reflections --transcript_dir transcripts --rag_docs_dir knowledge --rag_db_dir rag_db --log_text_chars 0 --render
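After the run, the outputs should land in the directories passed above:
ls knowledge/reflections transcripts rag_db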
----------
Clear the RAG state:
rm -f knowledge/reflections/*.md
rm -rf rag_db
rm -f transcripts/*
python3 scripts/build_rag_index.py \
--docs_dir knowledge \
--db_dir rag_db \
--ollama_base_url http://localhost:11434
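To sanity-check the embedding model directly (Ollama's standard /api/embeddings endpoint):
curl http://localhost:11434/api/embeddings -d '{"model": "nomic-embed-text", "prompt": "test"}'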
python3 scripts/run_centralized_llm_experiments.py --prompt_format language
python3 scripts/run_centralized_llm_experiments.py \
--prompt_format json \
--results_dir results/shelf/central_llm_experiments/JSON \
--session_dir results/shelf/central_llm_experiments/JSON/session_20260319_XXXXXX \
--max_steps 1000 \
--seed 0
python3 scripts/run_obj2_shared_context_llm.py \
--prompt_format language \
--max_steps 0 \
--max_busy_steps_for_replan 10 \
--max_inactivity_steps 500 \
--disable_support_needed_soon \
--find_path_agent_aware_always \
--seed 0 \
--results_dir results/calls/shared_context/natural \
--session_dir results/calls/shared_context/natural/session_20260329_094333 \
--resume_from_model "gemma3:12b" \
--resume_from_scenario "medium_agv_heavy_6v2"
python3 scripts/run_obj2_shared_context_llm.py \
--prompt_format json \
--max_steps 0 \
--max_busy_steps_for_replan 10 \
--max_inactivity_steps 500 \
--disable_support_needed_soon \
--find_path_agent_aware_always \
--seed 0 \
--results_dir results/calls/shared_context/json