# Tight probability bounds

Let $\mathcal{F}$ be a class of binary functions on a probability space $\Omega$. Write $P(f) = \mathbb{E}\,f(Z)$ and $P_n(f) = \frac{1}{n}\sum_{i=1}^{n} f(Z_i)$, where the $Z_i$ are i.i.d. samples from $\Omega$. It is known that

$$\mathbb{P}\left(\sup_{f\in\mathcal{F}} |P_n(f) - P(f)| \leq \sqrt{\frac{8}{n}\left(\log\frac{4}{\delta} + d\log\frac{2en}{d}\right)}\,\right) \geq 1-\delta,$$

where $d$ is the VC dimension of $\mathcal{F}$.
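To get a sense of scale, the right-hand side of the VC bound can be evaluated numerically. A minimal sketch (assuming the constants $8$, $4/\delta$, and $2en/d$ as in the reconstruction above; `vc_bound` is an illustrative helper, not from the question):

```python
import math

def vc_bound(n, d, delta):
    """Right-hand side: sqrt((8/n) * (log(4/delta) + d*log(2*e*n/d)))."""
    return math.sqrt((8.0 / n) * (math.log(4.0 / delta)
                                  + d * math.log(2.0 * math.e * n / d)))

# The bound shrinks like sqrt((d log n) / n) as n grows.
for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {n:>7}: bound = {vc_bound(n, d=5, delta=0.05):.4f}")
```

Note how slowly the extra $\log$ factor decays: this is exactly the factor the answer below says can be removed by chaining.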

Letting $\mathcal{R}_n(\mathcal{F})$ denote the Rademacher complexity of $\mathcal{F}$, we also have

$$\mathbb{P}\left(\sup_{f\in\mathcal{F}} |P(f) - P_n(f)| \leq 2\mathcal{R}_n(\mathcal{F}) + \sqrt{\frac{1}{2n}\log\frac{2}{\delta}}\,\right) \geq 1-\delta.$$
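For intuition about the Rademacher term, the empirical version $\hat{\mathcal{R}}_n(\mathcal{F}) = \mathbb{E}_\sigma\big[\sup_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^n \sigma_i f(Z_i)\big]$ can be estimated by Monte Carlo. A sketch for the class of threshold functions $f_t(z)=\mathbf{1}[z\le t]$ on $[0,1]$ (my choice of class for illustration, not from the question), where the supremum reduces to a maximum prefix sum:

```python
import random

def empirical_rademacher_thresholds(z, trials=2000, seed=0):
    """Monte Carlo estimate of E_sigma[ sup_t (1/n) sum_i sigma_i * 1[z_i <= t] ]
    for the threshold class f_t(z) = 1[z <= t]."""
    rng = random.Random(seed)
    n = len(z)
    # Indices of z in increasing order: the sup over t is the maximum
    # prefix sum of the sigma_i taken in this order (t below all z_i gives 0).
    order = sorted(range(n), key=lambda i: z[i])
    total = 0.0
    for _ in range(trials):
        sigma = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        best = prefix = 0.0
        for i in order:
            prefix += sigma[i]
            best = max(best, prefix)
        total += best / n
    return total / trials

rng = random.Random(1)
z = [rng.random() for _ in range(200)]
print(f"estimated Rademacher complexity: {empirical_rademacher_thresholds(z):.3f}")
```

For $n = 200$ the estimate lands near $\sqrt{2/(\pi n)} \approx 0.056$ (the expected maximum of a scaled sign random walk), decaying like $1/\sqrt{n}$, as one expects for a class of VC dimension $1$.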

My question: how tight are these two bounds on $\sup_{f\in\mathcal{F}} |P_n(f) - P(f)|$, and can either of them be improved?

Best answer

This is almost a duplicate (see my comment above) but
I’ll give a quick answer before it (likely) gets closed. The first
inequality in the OP has a superfluous \$log(n/d)\$, which may be
removed via chaining, as discussed here:
Tight VC bound for agnostic learning

Once that log-factor is removed from the VC bound, it becomes
optimal up to constants (i.e., essentially unimprovable). See
Section 5 in the Anthony-Bartlett book: http://www.cambridge.org/gb/academic/subjects/computer-science/pattern-recognition-and-machine-learning/neural-network-learning-theoretical-foundations?format=HB&isbn=9780521573535#JEHXi1lP4I6Gl8dg.97

Regarding the optimality of the Rademacher bound, see the