diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 0185d72..4672e09 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -3,7 +3,7 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
- rev: 2c9f875913ee60ca25ce70243dc24d5b6415598c # frozen: v4.6.0
+ rev: 3e8a8703264a2f4a69428a0aa4dcb512790b2c8c # frozen: v6.0.0
hooks:
- id: check-added-large-files
- id: check-ast
@@ -19,30 +19,30 @@ repos:
- id: trailing-whitespace
- repo: https://github.com/pre-commit/mirrors-prettier
- rev: ffb6a759a979008c0e6dff86e39f4745a2d9eac4 # frozen: v3.1.0
+ rev: f12edd9c7be1c20cfa42420fd0e6df71e42b51ea # frozen: v4.0.0-alpha.8
hooks:
- id: prettier
files: \.(css|md|yml|yaml)
args: [--prose-wrap=preserve]
- - repo: https://github.com/psf/black
- rev: 3702ba224ecffbcec30af640c149f231d90aebdb # frozen: 24.4.2
+ - repo: https://github.com/psf/black-pre-commit-mirror
+ rev: fa505ab9c3e0fedafe1709fd7ac2b5f8996c670d # frozen: 26.3.1
hooks:
- id: black
- repo: https://github.com/asottile/blacken-docs
- rev: 960ead214cd1184149d366c6d27ca6c369ce46b6 # frozen: 1.16.0
+ rev: dda8db18cfc68df532abf33b185ecd12d5b7b326 # frozen: 1.20.0
hooks:
- id: blacken-docs
- repo: https://github.com/asottile/pyupgrade
- rev: 32151ac97cbfd7f9dcd22e49516fb32266db45b4 # frozen: v3.16.0
+ rev: 75992aaa40730136014f34227e0135f63fc951b4 # frozen: v3.21.2
hooks:
- id: pyupgrade
args: [--py38-plus]
- repo: https://github.com/codespell-project/codespell
- rev: "193cd7d27cd571f79358af09a8fb8997e54f8fff" # frozen: v2.3.0
+ rev: "2ccb47ff45ad361a21071a7eedda4c37e6ae8c5a" # frozen: v2.4.2
hooks:
- id: codespell
args: ["-w", "-L", "ist,cant,connexion,multline,checkin"]
diff --git a/content/posts/matplotlib/animated-polar-plot/index.md b/content/posts/matplotlib/animated-polar-plot/index.md
index 75fb276..bfccdfe 100644
--- a/content/posts/matplotlib/animated-polar-plot/index.md
+++ b/content/posts/matplotlib/animated-polar-plot/index.md
@@ -67,12 +67,12 @@ ndf.head()
This produces:
```pycon
- date tsurf t1000
-0 2009-12-31 0.0 0.0
-1 2010-01-07 0.0 0.0
-2 2010-01-14 0.0 0.0
-3 2010-01-21 0.0 0.0
-4 2010-01-28 0.0 0.0
+date        tsurf  t1000
+2009-12-31    0.0    0.0
+2010-01-07    0.0    0.0
+2010-01-14    0.0    0.0
+2010-01-21    0.0    0.0
+2010-01-28    0.0    0.0
```
Then it's time to plot, for that we first need to import what we need, and set some useful variables.
diff --git a/content/posts/networkx/aTSP/finding-all-minimum-arborescences/index.md b/content/posts/networkx/aTSP/finding-all-minimum-arborescences/index.md
index 2641adb..6cf0a5f 100644
--- a/content/posts/networkx/aTSP/finding-all-minimum-arborescences/index.md
+++ b/content/posts/networkx/aTSP/finding-all-minimum-arborescences/index.md
@@ -110,7 +110,7 @@ Now that we are familiar with the minimum arborescence algorithm, we can discuss
The changes will be primarily located in step 1.
Under the normal operation of the algorithm, the consideration which happens at each vertex might look like this.
-
+
Where the bolded arrow is chosen by the algorithm as it is the incoming arc with minimum weight.
Now, if we were required to include a different edge, say the weight 6 arc, we would want this behavior even though it is strictly speaking not optimal.
diff --git a/content/posts/networkx/aTSP/implementing-the-iterators/index.md b/content/posts/networkx/aTSP/implementing-the-iterators/index.md
index 328f9da..0625c71 100644
--- a/content/posts/networkx/aTSP/implementing-the-iterators/index.md
+++ b/content/posts/networkx/aTSP/implementing-the-iterators/index.md
@@ -153,7 +153,7 @@ $$
$$
possible combinations of edges which could be arborescences.
-That's a lot of combintation, more than I wanted to check by hand so I wrote a short python script.
+That's a lot of combinations, more than I wanted to check by hand, so I wrote a short Python script.
```python
from itertools import combinations
diff --git a/content/posts/numpy/numpy-rng/index.md b/content/posts/numpy/numpy-rng/index.md
index 4f46dc5..ad86a37 100644
--- a/content/posts/numpy/numpy-rng/index.md
+++ b/content/posts/numpy/numpy-rng/index.md
@@ -135,7 +135,9 @@ I hope this blog post helped you understand the best ways to use NumPy RNGs. The
- To know more about the default RNG used in NumPy, named PCG, I recommend the [PCG paper](https://www.pcg-random.org/paper.html) which also contains lots of useful information about RNGs in general. The [pcg-random.org website](https://www.pcg-random.org) is also full of interesting information about RNGs.
[^1]: If you only need a seed for reproducibility and do not need independence with respect to others, say for a unit test, a small seed is perfectly fine.
+
[^2]: A good RNG is expected to produce independent numbers for a given seed. However, the independence of sequences generated from two different seeds is not always guaranteed. For instance, it is possible that the sequence started with the second seed might quickly converge to an internal state also obtained by the first seed. This can result in both RNGs producing the same subsequent numbers, which would compromise the randomness expected from distinct seeds.
+
[^3]:
Before knowing about `default_rng`, and before NumPy 1.17, I was using the scikit-learn function [`check_random_state`](https://scikit-learn.org/stable/modules/generated/sklearn.utils.check_random_state.html) which is of course heavily used in the scikit-learn codebase. While writing this post I discovered that this function is now available in [scipy](https://github.com/scipy/scipy/blob/62d2af2e13280d29781585aa39a3c5a5dfdfba17/scipy/_lib/_util.py#L231). A look at the docstring and/or the source code of this function will give you a good idea about what it does. The differences with `default_rng` are that `check_random_state` currently relies on `np.random.RandomState` and that when `None` is passed to `check_random_state` then the function returns the already existing global NumPy RNG. The latter can be convenient because if you fix the seed of the global RNG before in your script using `np.random.seed`, `check_random_state` returns the generator that you seeded. However, as explained above, this is not the recommended practice and you should be aware of the risks and the side effects.
[^4]: Before 1.25 you need to get the `SeedSequence` from the RNG using the `_seed_seq` private attribute of the underlying bit generator: `rng.bit_generator._seed_seq`. You can then spawn from this `SeedSequence` to get child seeds that will result in independent RNGs.
diff --git a/content/posts/scientific-python/community-considerations-around-ai/index.md b/content/posts/scientific-python/community-considerations-around-ai/index.md
index e97e2cf..d121442 100644
--- a/content/posts/scientific-python/community-considerations-around-ai/index.md
+++ b/content/posts/scientific-python/community-considerations-around-ai/index.md
@@ -51,6 +51,7 @@ In a (mostly positive) summary of the current (beginning of 2026) state of AI fo
> _The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don’t manage their confusion, they don’t seek clarifications, they don’t surface inconsistencies, they don’t present tradeoffs, they don’t push back when they should, and they are still a little too sycophantic [telling users what they want to hear]._ — Andrej Karpathy[^karpathy-errors]
[^karpathy-summary]: https://x.com/karpathy/status/2015883857489522876?s=20
+
[^karpathy-errors]: https://x.com/karpathy/status/2015883857489522876
### Reviewer frustration
@@ -77,6 +78,7 @@ Somewhat tongue in cheek, software engineer Mike Judge notes:
> _If so many developers are so extraordinarily productive using these tools, where is the flood of shovelware? We should be seeing apps of all shapes and sizes, video games, new websites, mobile apps, software-as-a-service apps — we should be drowning in choice. We should be in the middle of an indie software revolution. We should be seeing 10,000 Tetris clones on Steam._ — Mike Judge[^mike-judge-self-test]
[^metr-study]: https://secondthoughts.ai/p/ai-coding-slowdown
+
[^mike-judge-self-test]: https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding
Experiments to determine the "AI efficiency multiplier" are structured as follows: you generate a list of tasks. For each, you estimate how long it will take, and then flip a coin to decide whether you implement the solution using the "classic approach", or by using an agent. You then do the task, estimate how long it took, and note the time it _actually_ took.