content/autograder_gradescope.md (10 additions, 10 deletions)
@@ -1,5 +1,5 @@
 ---
-title: Autograder and Gradescope / Pensieve
+title: Autograder and Gradescope / Pensive
 ---

 :::{seealso} Citation
@@ -59,19 +59,19 @@ This can happen if you “overwrite” a variable that is used in a question. Fo
 You probably ran your notebook out of order. [Re-run all previous cells](#running-cells) in order, which is how your code will be graded.

-## Gradescope/Pensieve
+## Gradescope/Pensive

-When submitting to Gradescope or Pensieve, there are often unexpected errors that make students lose more points than expected. Thus, it is imperative that you **stay on the submission page until the autograder finishes running**, and the results are displayed.
+When submitting to Gradescope or Pensive, unexpected errors often cost students more points than they expect. It is therefore imperative that you **stay on the submission page until the autograder finishes running** and the results are displayed.

-### Why did a Gradescope/Pensieve test fail when all the Jupyter notebook’s tests passed?
+### Why did a Gradescope/Pensive test fail when all the Jupyter notebook’s tests passed?

-This can happen if you’re running your notebook’s cells out of order. The autograder runs your notebook from top-to-bottom. If you’re defining a variable at the bottom of your notebook and using it at the top, the Gradescope/Pensieve autograder will fail because it doesn’t recognize the variable when it encounters it.
+This can happen if you’re running your notebook’s cells out of order. The autograder runs your notebook from top to bottom. If you define a variable at the bottom of your notebook and use it at the top, the Gradescope/Pensive autograder will fail because the variable is not yet defined when it is first encountered.
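The out-of-order failure described above can be reproduced outside a notebook. Below is a minimal sketch in plain Python (the two cell bodies are hypothetical stand-ins, not course code) that runs "cells" strictly top to bottom, the way the autograder does:

```python
# Simulate an autograder executing notebook cells strictly top to bottom.
# The two cell bodies below are hypothetical stand-ins for real notebook cells.
cells = [
    "total = count + 1   # uses `count` before any cell defines it",
    "count = 41          # defined too late for the cell above",
]

namespace = {}
failures = []
for i, src in enumerate(cells, start=1):
    try:
        exec(src, namespace)
    except NameError as exc:
        failures.append((i, str(exc)))

print(failures)  # cell 1 fails even though each cell "works" in a live kernel
```

In a live kernel where cell 2 was run first, both cells succeed; in a fresh top-to-bottom run, cell 1 raises `NameError` because `count` does not exist yet.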

-Another scenario where this can happen is if you defined a variable in an earlier cell but later changed its name or deleted it. While your current session's kernel will "remember" the variable, the Gradescope/Pensieve autograder will not, because it runs the notebook from top to bottom with no variables pre-stored. As a result, it will not recognize the variable if it has been deleted or renamed, and will throw an error.
+Another scenario where this can happen is if you defined a variable in an earlier cell but later renamed or deleted it. While your current session's kernel will "remember" the old variable, the Gradescope/Pensive autograder will not, because it runs the notebook from top to bottom with no pre-stored variables. As a result, any reference to the deleted or renamed variable will throw an error.
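The stale-name scenario can also be sketched in plain Python (the names `df_clean` and `cleaned` below are hypothetical examples of a variable before and after a rename): a live kernel keeps the old binding around, while a fresh top-to-bottom run sees only what the current cell sources define.

```python
# What your running kernel "remembers": the old name survives a rename
# because it was bound during an earlier execution.
kernel_namespace = {"df_clean": [1, 2, 3]}    # bound before the rename
kernel_namespace["cleaned"] = [1, 2, 3]       # cell re-run after the rename
still_works = "df_clean" in kernel_namespace  # stale name still resolves locally

# What the autograder sees: a fresh namespace built only from current cells.
fresh_namespace = {}
exec("cleaned = [1, 2, 3]", fresh_namespace)
error = None
try:
    # A later cell that was never updated to use the new name:
    exec("result = sum(df_clean)", fresh_namespace)
except NameError as exc:
    error = str(exc)

print(still_works, error)
```

This is why the notebook can pass every test locally while the autograder, starting from an empty namespace, fails on the first reference to the old name.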

 The third scenario where this can happen is if you do not save your notebook before running the final export cell, which generates a zip file containing your notebook and the necessary files. If you run the export cell without saving, the autograder will not have access to the most recent version of your notebook, which can lead to unexpected errors. It will rely on the state of the notebook at the time of the last save, so any changes made afterward will not be reflected in the autograder’s execution.

-This is why we recommend first saving and then going into the top left menu and clicking `Kernel` -> `Restart` -> `Run All`. The autograder “forgets” all of the variables and runs the notebook from top-to-bottom like the Gradescope/Pensieve autograder does. This will highlight any issues.
+This is why we recommend first saving and then going into the top-left menu and clicking `Kernel` -> `Restart` -> `Run All`. The kernel “forgets” all of the variables and runs the notebook from top to bottom, just like the Gradescope/Pensive autograder does. This will highlight any issues.

 Find the first cell that raises an error. Make sure that all of the variables used in that cell have been defined above that cell, and not below.

@@ -80,9 +80,9 @@ Find the first cell that raises an error. Make sure that all of the variables us
 This happens when you try to access a variable that has not been defined yet. Since the autograder runs all the cells in order, if you defined a variable in a cell further down and accessed it before that cell, the autograder will likely throw this error. Another reason this can occur is that the notebook was not saved before the autograder tests were run. When in doubt, it is good practice to restart your kernel, run all the cells again, and save the notebook before running the cell that causes this error.

 (multiple-nameError)=
-### Why do I see multiple `NameError: name ___ is not defined` errors for all questions after a certain point on Gradescope/Pensieve?
+### Why do I see multiple `NameError: name ___ is not defined` errors for all questions after a certain point on Gradescope/Pensive?

-This can happen if you import external packages when you are not instructed to do so. The base image on Gradescope/Pensieve includes only a specific list of pre-installed packages, defined by the course staff in the autograder. If you import a package that is not on this list, it will not be available to the autograder, even if you download the package correctly on your local Jupyter notebook environment. As a result, it will first encounter a `ModuleNotFoundError` when trying to import the package (though this error may not be shown to you), and then a `NameError` for every question afterward. Although this behavior is unintuitive, it’s a limitation of the autograder—only course staff can view certain output messages from your notebook to prevent cheating. If you encounter this issue, review your notebook’s imports and ensure you are not using any packages not included in the autograder environment.
+This can happen if you import external packages when you are not instructed to do so. The base image on Gradescope/Pensive includes only a specific list of pre-installed packages, defined by the course staff in the autograder. If you import a package that is not on this list, it will not be available to the autograder, even if you installed the package correctly in your local Jupyter notebook environment. As a result, the autograder will first encounter a `ModuleNotFoundError` when trying to import the package (though this error may not be shown to you), and then a `NameError` for every question afterward. Although this behavior is unintuitive, it is a limitation of the autograder: to prevent cheating, only course staff can view certain output messages from your notebook. If you encounter this issue, review your notebook’s imports and ensure you are not using any packages missing from the autograder environment.
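The cascade can be sketched as follows (`nonexistent_pkg` is a placeholder for any package missing from the autograder image): a single failed import surfaces as a `NameError` in every later cell that uses the unbound name.

```python
# One missing package produces a single ModuleNotFoundError, then a
# NameError for each later cell that relies on the unbound name.
# `nonexistent_pkg` is a placeholder, not a real package.
cells = [
    "import nonexistent_pkg as xp",  # fails: not in the base image
    "arr = xp.sum([1, 2, 3])",       # fails: `xp` was never bound
    "total = arr + 1",               # fails: `arr` was never bound
]

namespace = {}
errors = []
for src in cells:
    try:
        exec(src, namespace)
    except (ModuleNotFoundError, NameError) as exc:
        errors.append(type(exc).__name__)

print(errors)  # only the first error is the real cause; the rest are fallout
```

When debugging a wall of `NameError`s, look for the earliest failure rather than the later ones, since those are downstream symptoms.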

 ### My autograder keeps running/timed out

@@ -102,6 +102,6 @@ or if it times out:

 it means that the Gradescope autograder failed to execute in the expected amount of time. This could be due to an inefficiency in your code or an issue on Gradescope's end, so we recommend resubmitting and allowing the autograder to rerun. If the issue persists after a few attempts, you may need to investigate your code for inefficiencies.

-For example, if you rerun all cells in your Jupyter notebook and notice that some cells take significantly longer to run than others, you might need to optimize those cells. Keep in mind that the time it takes to run all tests in your Jupyter notebook is not equivalent to the time it takes on Gradescope/Pensieve. However, it is proportional—if your notebook runs slowly on Datahub, it will likely run even more slowly on Gradescope/Pensieve, as the difference in runtime is amplified.
+For example, if you rerun all cells in your Jupyter notebook and notice that some cells take significantly longer to run than others, you might need to optimize those cells. Keep in mind that the time it takes to run all tests in your Jupyter notebook is not equivalent to the time it takes on Gradescope/Pensive. However, it is roughly proportional: if your notebook runs slowly on Datahub, it will likely run even more slowly on Gradescope/Pensive, as the difference in runtime is amplified.
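One way to find the slow cells is to time each one in a fresh run. A minimal sketch (the cell bodies are hypothetical stand-ins for notebook cells):

```python
import time

# Time each "cell" in a fresh namespace to spot the slow ones.
cells = {
    "load": "data = list(range(1_000))",
    "heavy_loop": "squares = [i * i for i in range(500_000)]",
    "summary": "total = sum(data)",
}

namespace = {}
timings = {}
for name, src in cells.items():
    start = time.perf_counter()
    exec(src, namespace)
    timings[name] = time.perf_counter() - start

slowest = max(timings, key=timings.get)
print(f"slowest cell: {slowest} ({timings[slowest]:.4f}s)")
```

In a real notebook, the `%%time` cell magic gives the same per-cell breakdown without any extra code; either way, focus your optimization on the one or two cells that dominate the total runtime.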

 **It is your responsibility to ensure that the autograder runs properly**, and if it still fails, you should follow up by making a private Ed post.
content/projA2.md (3 additions, 3 deletions)
@@ -111,9 +111,9 @@ Note that in the function body, the skeleton code includes:

 The above two lines of code are included to avoid dividing by 0 when computing `prop_overest` in the case that `subset_df` does not have any rows. When interpreting the plots, you can disregard the negative portions as you know they are invalid values corresponding to this edge case. You can verify this by observing that your `RMSE` plot does not display the corresponding intervals.

-## Gradescope/Pensieve
+## Gradescope/Pensive

-### I don't have many Gradescope/Pensieve submissions left
+### I don't have many Gradescope/Pensive submissions left

 If you're almost out of Gradescope submissions, try using k-fold cross-validation to check the accuracy of your model. Results from cross-validation will be closer to the test set accuracy than results from the training data. Feel free to take a look at this [code](https://ds100.org/fa24/resources/assets/lectures/lec16/lec16.html) if you're confused on how to implement cross-validation.

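A hedged sketch of k-fold cross-validation using only NumPy with synthetic data (in the project you would substitute your own design matrix and targets; `sklearn.model_selection.cross_val_score` is the usual shortcut when it is available):

```python
import numpy as np

# Manual k-fold cross-validation for a least-squares linear model.
# X and y below are synthetic; substitute your own features and targets.
rng = np.random.default_rng(100)
X = rng.normal(size=(120, 3))
true_theta = np.array([2.0, -1.0, 0.5])
y = X @ true_theta + rng.normal(scale=0.1, size=120)

def kfold_rmse(X, y, k=5, seed=0):
    """Average validation RMSE over k folds."""
    indices = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(indices, k)
    rmses = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit on the k-1 training folds, evaluate on the held-out fold.
        theta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        preds = X[val] @ theta
        rmses.append(np.sqrt(np.mean((y[val] - preds) ** 2)))
    return float(np.mean(rmses))

print(f"5-fold CV RMSE: {kfold_rmse(X, y):.3f}")
```

Because the CV estimate approximates test-set error without touching the hidden test set, it lets you compare candidate models locally and spend your remaining Gradescope submissions only on the best one.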
@@ -123,7 +123,7 @@ This occurs when you remove outliers when preprocessing the testing data. _Pleas

 ### My code runs locally, but it fails for every question after a certain point

-See [Why do I see multiple NameError: name `___` is not defined errors for all questions after a certain point on Gradescope/Pensieve?](#multiple-nameError)
+See [Why do I see multiple NameError: name `___` is not defined errors for all questions after a certain point on Gradescope/Pensive?](#multiple-nameError)