
Improve lower bound for C_3c #70

Open
sebastian-griego wants to merge 1 commit into teorth:main from sebastian-griego:improve-3c-lower-bound


Conversation

@sebastian-griego

Summary

This updates the lower bound for the 4-slope Kakeya-type sum-difference constant from

$$
C_{3c} \geq 1.67471
$$

to

$$
C_{3c} \geq 1.67473389.
$$

The bound comes from a 26-point entropy construction. The rounded weights below sum to

[
0.99999999999852250171,
]

and after normalizing them, one obtains

$$
\begin{aligned}
H(X) &= 1.6377686609033206,\\
H(Y) &= 1.6377686609051913,\\
H(X+Y) &= 1.6377686609063419,\\
H(X+2Y) &= 1.6377686607925004,\\
H(X-Y) &= 2.7428266885756738.
\end{aligned}
$$

Thus

$$
\frac{H(X-Y)}{\max\bigl(H(X),H(Y),H(X+Y),H(X+2Y)\bigr)}
= 1.6747338949921000\ldots,
$$

which certifies the stated lower bound.
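As a quick floating-point sanity check, the quoted entropy values can be plugged into the ratio directly (this only re-does the final division; the full `Decimal` verification script is given below):

```python
# Recompute the certifying ratio from the five entropy values quoted above.
H = {
    "H(X)": 1.6377686609033206,
    "H(Y)": 1.6377686609051913,
    "H(X+Y)": 1.6377686609063419,
    "H(X+2Y)": 1.6377686607925004,
    "H(X-Y)": 2.7428266885756738,
}
denominator = max(v for k, v in H.items() if k != "H(X-Y)")
ratio = H["H(X-Y)"] / denominator
print(ratio)  # approximately 1.6747338949921...
```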

This also fixes the apparent typo in the entropy formulation on the 3c.md page, changing C_{3b} to C_{3c}.

Certificate

| X  | Y  | weight         |
|----|----|----------------|
| 1  | 7  | 1.30506564e-12 |
| 3  | 5  | 9.25745153e-7  |
| 3  | 6  | 1.52498825e-6  |
| 3  | 7  | 6.57289640e-8  |
| 5  | 4  | 0.001244422280 |
| 5  | 5  | 0.006113255152 |
| 5  | 6  | 0.000037354259 |
| 7  | 2  | 0.016223522296 |
| 7  | 3  | 0.169721258229 |
| 7  | 4  | 0.008420798270 |
| 7  | 5  | 0.011728626004 |
| 9  | 1  | 0.248375293086 |
| 9  | 2  | 0.046862624648 |
| 9  | 3  | 0.243496670072 |
| 11 | 0  | 0.010100758400 |
| 11 | 1  | 0.188770387301 |
| 11 | 2  | 0.018228788494 |
| 13 | -1 | 0.006653795672 |
| 13 | 0  | 0.002961930500 |
| 13 | 1  | 0.020758390703 |
| 15 | -2 | 0.000003025952 |
| 15 | -1 | 0.000275897370 |
| 15 | 0  | 0.000020680540 |
| 17 | -3 | 6.99564607e-12 |
| 17 | -2 | 1.51075546e-9  |
| 17 | -1 | 2.78909933e-9  |
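As a sketch of the pre-normalization step, the rounded weights from the table can be summed exactly with `decimal.Decimal` to confirm the stated total:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

# The 26 rounded weights from the certificate table, in table order.
weights = [
    "1.30506564e-12", "9.25745153e-7", "1.52498825e-6", "6.57289640e-8",
    "0.001244422280", "0.006113255152", "0.000037354259", "0.016223522296",
    "0.169721258229", "0.008420798270", "0.011728626004", "0.248375293086",
    "0.046862624648", "0.243496670072", "0.010100758400", "0.188770387301",
    "0.018228788494", "0.006653795672", "0.002961930500", "0.020758390703",
    "0.000003025952", "0.000275897370", "0.000020680540", "6.99564607e-12",
    "1.51075546e-9", "2.78909933e-9",
]
total = sum(Decimal(w) for w in weights)
print(total)  # should print the stated sum, just below 1
```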

Verification

A short Python script using `decimal.Decimal` with 100 digits of precision checks the certificate. It normalizes the rounded weights before computing the entropies.

```python
#!/usr/bin/env python3
from collections import defaultdict
from decimal import Decimal, getcontext

getcontext().prec = 100
LN2 = Decimal(2).ln()

WEIGHTS = [
    (1, 7, "1.30506564e-12"),
    (3, 5, "9.25745153e-7"),
    (3, 6, "1.52498825e-6"),
    (3, 7, "6.57289640e-8"),
    (5, 4, "0.001244422280"),
    (5, 5, "0.006113255152"),
    (5, 6, "0.000037354259"),
    (7, 2, "0.016223522296"),
    (7, 3, "0.169721258229"),
    (7, 4, "0.008420798270"),
    (7, 5, "0.011728626004"),
    (9, 1, "0.248375293086"),
    (9, 2, "0.046862624648"),
    (9, 3, "0.243496670072"),
    (11, 0, "0.010100758400"),
    (11, 1, "0.188770387301"),
    (11, 2, "0.018228788494"),
    (13, -1, "0.006653795672"),
    (13, 0, "0.002961930500"),
    (13, 1, "0.020758390703"),
    (15, -2, "0.000003025952"),
    (15, -1, "0.000275897370"),
    (15, 0, "0.000020680540"),
    (17, -3, "6.99564607e-12"),
    (17, -2, "1.51075546e-9"),
    (17, -1, "2.78909933e-9"),
]


def entropy(masses):
    # Shannon entropy in bits; zero-mass atoms contribute nothing.
    return sum(-p * (p.ln() / LN2) for p in masses if p)


def marginal_entropy(points, transform):
    # Push the joint distribution forward through `transform` and
    # compute the entropy of the resulting marginal.
    masses = defaultdict(Decimal)
    for x, y, p in points:
        masses[transform(x, y)] += p
    return entropy(masses.values())


total = sum(Decimal(w) for _, _, w in WEIGHTS)
points = [(x, y, Decimal(w) / total) for x, y, w in WEIGHTS]

values = {
    "H(X)": marginal_entropy(points, lambda x, y: x),
    "H(Y)": marginal_entropy(points, lambda x, y: y),
    "H(X+Y)": marginal_entropy(points, lambda x, y: x + y),
    "H(X+2Y)": marginal_entropy(points, lambda x, y: x + 2 * y),
    "H(X-Y)": marginal_entropy(points, lambda x, y: x - y),
}
denominator = max(values[name] for name in ["H(X)", "H(Y)", "H(X+Y)", "H(X+2Y)"])
ratio = values["H(X-Y)"] / denominator

print(f"sum of rounded input weights = {total}")
for name in ["H(X)", "H(Y)", "H(X+Y)", "H(X+2Y)", "H(X-Y)"]:
    print(f"{name:8s} = {values[name]}")
print(f"denominator = {denominator}")
print(f"ratio       = {ratio}")

assert ratio > Decimal("1.67473389")
```

Expected output includes:

```
sum of rounded input weights = 0.99999999999852250171
H(X)     = 1.637768660903320612195658782...
H(Y)     = 1.637768660905191258025475941...
H(X+Y)   = 1.637768660906341851866383146...
H(X+2Y)  = 1.637768660792500439978929319...
H(X-Y)   = 2.742826688575673846487425741...
ratio    = 1.674733894992100060528014329...
```

AI assistance disclosure

The construction, patch, PR text, and verification script were generated with ChatGPT 5.5 Pro. I ran the verification script locally before submission.

@sebastian-griego sebastian-griego force-pushed the improve-3c-lower-bound branch from 86c4274 to 458e8a0 Compare May 13, 2026 22:08
@sebastian-griego sebastian-griego force-pushed the improve-3c-lower-bound branch from 458e8a0 to cea3419 Compare May 13, 2026 23:10