Code_Output.txt
276 lines (261 loc) · 11 KB
SCENARIO1:
run1: hyperparameters: learning_rate = 0.1, discount = 0.9, 1000 epochs
python3 Scenario1.py
Epoch 100/1000, Reward: 90
Epoch 200/1000, Reward: 91
Epoch 300/1000, Reward: 97
Epoch 400/1000, Reward: 97
Epoch 500/1000, Reward: 97
Epoch 600/1000, Reward: 97
Epoch 700/1000, Reward: 97
Epoch 800/1000, Reward: 97
Epoch 900/1000, Reward: 97
Epoch 1000/1000, Reward: 97
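The Scenario1.py source is not shown in this log, but the hyperparameter names suggest tabular Q-learning. A minimal sketch of the update rule these runs presumably apply, with the logged learning_rate and discount as defaults (the function name and Q-table layout are assumptions, not taken from the actual script):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, learning_rate=0.1, discount=0.9):
    """One tabular Q-learning step:
    Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a 2D array indexed as Q[state, action]."""
    td_target = r + discount * np.max(Q[s_next])
    Q[s][a] += learning_rate * (td_target - Q[s][a])
    return Q
```

With learning_rate = 0.1 each update moves the estimate 10% of the way toward the TD target, which is consistent with the fast, stable convergence seen in this run.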
run2: hyperparameters: learning_rate = 0.1, discount = 0.9, 1000 epochs
python3 Scenario1.py --exploration softmax
Epoch 100/1000, Reward: 92
Epoch 200/1000, Reward: 92
Epoch 300/1000, Reward: 92
Epoch 400/1000, Reward: 92
Epoch 500/1000, Reward: 92
Epoch 600/1000, Reward: 92
Epoch 700/1000, Reward: 92
Epoch 800/1000, Reward: 92
Epoch 900/1000, Reward: 92
Epoch 1000/1000, Reward: 92
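The --exploration softmax flag presumably switches from epsilon-greedy to Boltzmann (softmax) action selection. A sketch of that selection rule, assuming a temperature-scaled softmax over the Q-values of the current state (the function name and temperature parameter are illustrative, not from Scenario1.py):

```python
import numpy as np

def softmax_action(q_values, temperature=1.0, rng=None):
    """Boltzmann/softmax exploration: sample an action with probability
    proportional to exp(Q(s,a) / temperature)."""
    rng = rng or np.random.default_rng()
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()          # subtract max for numerical stability
    probs = np.exp(prefs)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Because the sampling distribution concentrates on the current best action as Q-values separate, softmax runs tend to lock onto one policy early, which would explain the perfectly flat reward curves in the softmax runs above.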
run3: hyperparameters: learning_rate = 0.001, discount = 0.9, 1000 epochs
python3 Scenario1.py
Epoch 100/1000, Reward: -399
Epoch 200/1000, Reward: -557
Epoch 300/1000, Reward: -229
Epoch 400/1000, Reward: 38
Epoch 500/1000, Reward: -405
Epoch 600/1000, Reward: -16
Epoch 700/1000, Reward: -482
Epoch 800/1000, Reward: -346
Epoch 900/1000, Reward: -55
Epoch 1000/1000, Reward: -102
run4: hyperparameters: learning_rate = 0.1, discount = 0.9, 1000 epochs
python3 Scenario1.py
Epoch 100/1000, Reward: 72
Epoch 200/1000, Reward: 93
Epoch 300/1000, Reward: 85
Epoch 400/1000, Reward: 93
Epoch 500/1000, Reward: 92
Epoch 600/1000, Reward: 93
Epoch 700/1000, Reward: 93
Epoch 800/1000, Reward: 91
Epoch 900/1000, Reward: 93
Epoch 1000/1000, Reward: 93
run5: hyperparameters: learning_rate = 0.1, discount = 0.9, 1000 epochs
python3 Scenario1.py
Epoch 100/1000, Reward: 90
Epoch 200/1000, Reward: 95
Epoch 300/1000, Reward: 94
Epoch 400/1000, Reward: 96
Epoch 500/1000, Reward: 95
Epoch 600/1000, Reward: 94
Epoch 700/1000, Reward: 96
Epoch 800/1000, Reward: 96
Epoch 900/1000, Reward: 96
Epoch 1000/1000, Reward: 94
run6: hyperparameters: learning_rate = 0.1, discount = 0.9, 1000 epochs
python3 Scenario1.py --exploration softmax
Epoch 100/1000, Reward: 91
Epoch 200/1000, Reward: 91
Epoch 300/1000, Reward: 91
Epoch 400/1000, Reward: 91
Epoch 500/1000, Reward: 91
Epoch 600/1000, Reward: 91
Epoch 700/1000, Reward: 91
Epoch 800/1000, Reward: 91
Epoch 900/1000, Reward: 91
Epoch 1000/1000, Reward: 91
run7: hyperparameters: learning_rate = 0.1, discount = 0.9, 1000 epochs
python3 Scenario1.py --exploration softmax
Epoch 100/1000, Reward: 92
Epoch 200/1000, Reward: 92
Epoch 300/1000, Reward: 92
Epoch 400/1000, Reward: 92
Epoch 500/1000, Reward: 92
Epoch 600/1000, Reward: 92
Epoch 700/1000, Reward: 92
Epoch 800/1000, Reward: 92
Epoch 900/1000, Reward: 92
Epoch 1000/1000, Reward: 92
run8:
python3 Scenario1.py --exploration softmax
Epoch 100/1000, Reward: 99
Epoch 200/1000, Reward: 99
Epoch 300/1000, Reward: 99
Epoch 400/1000, Reward: 99
Epoch 500/1000, Reward: 99
Epoch 600/1000, Reward: 99
Epoch 700/1000, Reward: 99
Epoch 800/1000, Reward: 99
Epoch 900/1000, Reward: 99
Epoch 1000/1000, Reward: 99
SCENARIO2:
run1:
python3 Scenario2.py
Epoch 500: Reward 384, Steps 20
Epoch 1000: Reward 384, Steps 20
Epoch 1500: Reward 381, Steps 23
Epoch 2000: Reward 382, Steps 22
Epoch 2500: Reward 384, Steps 20
Epoch 3000: Reward 383, Steps 21
Epoch 3500: Reward 384, Steps 20
Epoch 4000: Reward 381, Steps 23
Epoch 4500: Reward 382, Steps 22
Epoch 5000: Reward 382, Steps 22
Testing learned policy...
Test results: Reward 384, Steps 20
run2:
python3 Scenario2.py --stochastic
Epoch 500: Reward 350, Steps 54
Epoch 1000: Reward 362, Steps 42
Epoch 1500: Reward 357, Steps 47
Epoch 2000: Reward 361, Steps 43
Epoch 2500: Reward 368, Steps 36
Epoch 3000: Reward 349, Steps 55
Epoch 3500: Reward 362, Steps 42
Epoch 4000: Reward 349, Steps 55
Epoch 4500: Reward 348, Steps 56
Epoch 5000: Reward 350, Steps 54
Testing learned policy...
Test results: Reward 324, Steps 80
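The --stochastic flag costs roughly 30 reward and 30 extra steps at test time versus the deterministic run, which is consistent with noisy action execution. A hypothetical noise model illustrating what such a flag typically does (the slip probability and action names are assumptions; Scenario2.py's actual mechanism is not shown in this log):

```python
import random

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT"]

def execute_action(intended, slip_prob=0.2, rng=None):
    """With probability slip_prob the environment executes a random
    *other* action instead of the intended one; otherwise the intended
    action is carried out."""
    rng = rng or random.Random()
    if rng.random() < slip_prob:
        return rng.choice([a for a in ACTIONS if a != intended])
    return intended
```

Under such noise even an optimal policy takes a variable number of steps, so the test trajectory (80 steps vs. 20 deterministic) is longer than any single training average.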
SCENARIO3:
run1:
python3 Scenario3.py
Epoch 500/10000, Reward: 362, Steps: 42, Success Rate: 0.71
Epoch 1000/10000, Reward: 366, Steps: 38, Success Rate: 0.99
Epoch 1500/10000, Reward: 368, Steps: 36, Success Rate: 1.00
Epoch 2000/10000, Reward: 369, Steps: 35, Success Rate: 1.00
Epoch 2500/10000, Reward: 367, Steps: 37, Success Rate: 1.00
Epoch 3000/10000, Reward: 369, Steps: 35, Success Rate: 1.00
Epoch 3500/10000, Reward: 367, Steps: 37, Success Rate: 1.00
Epoch 4000/10000, Reward: 364, Steps: 40, Success Rate: 1.00
Epoch 4500/10000, Reward: 371, Steps: 33, Success Rate: 1.00
Epoch 5000/10000, Reward: 371, Steps: 33, Success Rate: 1.00
Epoch 5500/10000, Reward: 369, Steps: 35, Success Rate: 1.00
Epoch 6000/10000, Reward: 371, Steps: 33, Success Rate: 1.00
Epoch 6500/10000, Reward: 371, Steps: 33, Success Rate: 1.00
Epoch 7000/10000, Reward: 370, Steps: 34, Success Rate: 1.00
Epoch 7500/10000, Reward: 371, Steps: 33, Success Rate: 1.00
Epoch 8000/10000, Reward: 365, Steps: 39, Success Rate: 1.00
Epoch 8500/10000, Reward: 371, Steps: 33, Success Rate: 1.00
Epoch 9000/10000, Reward: 369, Steps: 35, Success Rate: 1.00
Epoch 9500/10000, Reward: 367, Steps: 37, Success Rate: 1.00
Epoch 10000/10000, Reward: 371, Steps: 33, Success Rate: 1.00
Testing learned policy:
Agent starts at: (7, 8)
Agent took LEFT action and moved to (6, 8) of type EMPTY
Agent took LEFT action and moved to (5, 8) of type EMPTY
Agent took LEFT action and moved to (4, 8) of type EMPTY
Agent took UP action and moved to (4, 7) of type EMPTY
Agent took LEFT action and moved to (3, 7) of type EMPTY
Agent took LEFT action and moved to (2, 7) of type BLUE
Agent took UP action and moved to (2, 6) of type EMPTY
Agent took LEFT action and moved to (1, 6) of type EMPTY
Agent took UP action and moved to (1, 5) of type EMPTY
Agent took UP action and moved to (1, 4) of type EMPTY
Agent took RIGHT action and moved to (2, 4) of type EMPTY
Agent took RIGHT action and moved to (3, 4) of type EMPTY
Agent took RIGHT action and moved to (4, 4) of type RED
Agent took RIGHT action and moved to (5, 4) of type EMPTY
Agent took UP action and moved to (5, 3) of type EMPTY
Agent took RIGHT action and moved to (6, 3) of type EMPTY
Agent took RIGHT action and moved to (7, 3) of type EMPTY
Agent took RIGHT action and moved to (8, 3) of type EMPTY
Agent took DOWN action and moved to (8, 4) of type EMPTY
Agent took DOWN action and moved to (8, 5) of type EMPTY
Agent took RIGHT action and moved to (9, 5) of type EMPTY
Agent took DOWN action and moved to (9, 6) of type EMPTY
Agent took DOWN action and moved to (9, 7) of type EMPTY
Agent took LEFT action and moved to (8, 7) of type GREEN
Agent took RIGHT action and moved to (9, 7) of type EMPTY
Agent took UP action and moved to (9, 6) of type EMPTY
Agent took UP action and moved to (9, 5) of type EMPTY
Agent took LEFT action and moved to (8, 5) of type EMPTY
Agent took UP action and moved to (8, 4) of type EMPTY
Agent took LEFT action and moved to (7, 4) of type EMPTY
Agent took UP action and moved to (7, 3) of type EMPTY
Agent took UP action and moved to (7, 2) of type EMPTY
Agent took UP action and moved to (7, 1) of type BLUE
Goal reached in 33 steps, successfully collected all packages: True
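The "Testing learned policy" trace above is a greedy rollout: at each state the agent takes the argmax action of the learned Q-table until the goal. A minimal sketch of that loop, assuming a Q-table indexed by state and an environment step function returning (next_state, done); both names are illustrative, not from Scenario3.py:

```python
import numpy as np

def greedy_rollout(Q, env_step, start_state, max_steps=100):
    """Follow argmax_a Q(s, a) from start_state and return the visited
    trajectory. env_step(s, a) -> (next_state, done)."""
    s, path = start_state, [start_state]
    for _ in range(max_steps):
        a = int(np.argmax(Q[s]))      # greedy action, no exploration
        s, done = env_step(s, a)
        path.append(s)
        if done:
            break
    return path
```

Note that in the stochastic run2 trace below the rollout is still greedy in the chosen action; only the environment's execution of it is noisy, hence the detours.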
run2:
python3 Scenario3.py --stochastic
Epoch 500/10000, Reward: 331, Steps: 73, Success Rate: 0.34
Epoch 1000/10000, Reward: 348, Steps: 56, Success Rate: 0.97
Epoch 1500/10000, Reward: 348, Steps: 56, Success Rate: 0.99
Epoch 2000/10000, Reward: 346, Steps: 58, Success Rate: 0.99
Epoch 2500/10000, Reward: 350, Steps: 54, Success Rate: 0.98
Epoch 3000/10000, Reward: 352, Steps: 52, Success Rate: 0.99
Epoch 3500/10000, Reward: 352, Steps: 52, Success Rate: 0.98
Epoch 4000/10000, Reward: 352, Steps: 52, Success Rate: 0.97
Epoch 4500/10000, Reward: 347, Steps: 57, Success Rate: 0.98
Epoch 5000/10000, Reward: 354, Steps: 50, Success Rate: 0.99
Epoch 5500/10000, Reward: 348, Steps: 56, Success Rate: 0.99
Epoch 6000/10000, Reward: 351, Steps: 53, Success Rate: 0.99
Epoch 6500/10000, Reward: 351, Steps: 53, Success Rate: 0.98
Epoch 7000/10000, Reward: 346, Steps: 58, Success Rate: 0.99
Epoch 7500/10000, Reward: 354, Steps: 50, Success Rate: 0.98
Epoch 8000/10000, Reward: 354, Steps: 50, Success Rate: 0.99
Epoch 8500/10000, Reward: 352, Steps: 52, Success Rate: 0.98
Epoch 9000/10000, Reward: 350, Steps: 54, Success Rate: 0.99
Epoch 9500/10000, Reward: 349, Steps: 55, Success Rate: 1.00
Epoch 10000/10000, Reward: 353, Steps: 51, Success Rate: 0.98
Testing learned policy:
Agent starts at: (9, 10)
Agent took UP action and moved to (9, 9) of type EMPTY
Agent took UP action and moved to (9, 8) of type EMPTY
Agent took LEFT action and moved to (8, 8) of type EMPTY
Agent took LEFT action and moved to (7, 8) of type EMPTY
Agent took LEFT action and moved to (6, 8) of type EMPTY
Agent took LEFT action and moved to (5, 8) of type EMPTY
Agent took UP action and moved to (5, 7) of type EMPTY
Agent took UP action and moved to (5, 6) of type EMPTY
Agent took LEFT action and moved to (4, 6) of type EMPTY
Agent took LEFT action and moved to (3, 6) of type EMPTY
Agent took LEFT action and moved to (2, 6) of type EMPTY
Agent took LEFT action and moved to (1, 6) of type EMPTY
Agent took UP action and moved to (1, 5) of type EMPTY
Agent took UP action and moved to (1, 4) of type EMPTY
Agent took UP action and moved to (1, 3) of type BLUE
Agent took RIGHT action and moved to (2, 3) of type EMPTY
Agent took RIGHT action and moved to (3, 3) of type EMPTY
Agent took RIGHT action and moved to (4, 3) of type EMPTY
Agent took RIGHT action and moved to (5, 3) of type EMPTY
Agent took RIGHT action and moved to (6, 3) of type EMPTY
Agent took RIGHT action and moved to (7, 3) of type EMPTY
Agent took DOWN action and moved to (7, 4) of type EMPTY
Agent took DOWN action and moved to (7, 5) of type EMPTY
Agent took RIGHT action and moved to (8, 5) of type EMPTY
Agent took RIGHT action and moved to (9, 5) of type EMPTY
Agent took DOWN action and moved to (9, 6) of type EMPTY
Agent took DOWN action and moved to (9, 7) of type EMPTY
Agent took DOWN action and moved to (9, 8) of type EMPTY
Agent took DOWN action and moved to (9, 9) of type EMPTY
Agent took RIGHT action and moved to (10, 9) of type RED
Agent took UP action and moved to (10, 8) of type EMPTY
Agent took UP action and moved to (10, 7) of type EMPTY
Agent took LEFT action and moved to (9, 7) of type EMPTY
Agent took UP action and moved to (9, 6) of type EMPTY
Agent took UP action and moved to (9, 5) of type EMPTY
Agent took UP action and moved to (9, 4) of type EMPTY
Agent took UP action and moved to (9, 3) of type EMPTY
Agent took LEFT action and moved to (8, 3) of type EMPTY
Agent took UP action and moved to (8, 2) of type EMPTY
Agent took UP action and moved to (8, 1) of type GREEN
Agent took DOWN action and moved to (8, 2) of type EMPTY
Agent took LEFT action and moved to (7, 2) of type EMPTY
Agent took DOWN action and moved to (7, 3) of type EMPTY
Agent took LEFT action and moved to (6, 3) of type EMPTY
Agent took LEFT action and moved to (5, 3) of type EMPTY
Agent took LEFT action and moved to (4, 3) of type EMPTY
Agent took UP action and moved to (4, 2) of type EMPTY
Agent took LEFT action and moved to (3, 2) of type EMPTY
Agent took UP action and moved to (3, 1) of type EMPTY
Agent took LEFT action and moved to (2, 1) of type BLUE
Goal reached in 50 steps, successfully collected all packages: True