Tighter probability bounds

Let $\mathcal{F}$ be a class of binary functions on a probability space $\Omega$. Write $P_n(f) = \frac{1}{n}\sum_{i=1}^{n} f(Z_i)$ and $P(f) = \mathbb{E}f$, where the $Z_i$ are i.i.d. samples from $\Omega$. The following is known.

Theorem 1. If $\mathcal{F}$ has finite VC dimension $d$, then
$$\mathbb{P}\left(\sup_{f\in\mathcal{F}}\left|P_n(f)-P(f)\right| \leq \sqrt{\frac{8}{n}\left(d\log\frac{n}{d} + \log\frac{4}{\delta}\right)}\right) \geq 1-\delta.$$
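As a sanity check (not part of the original question), here is a minimal Python sketch that simulates $\sup_{f\in\mathcal{F}}|P_n(f)-P(f)|$ for the class of threshold functions $f_t(z)=\mathbf{1}\{z\le t\}$ on $[0,1]$ (VC dimension $d=1$) under the uniform distribution and compares it with the Theorem 1 bound; the sample size and $\delta$ are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, delta, trials = 2000, 1, 0.05, 200

# Thresholds f_t(z) = 1{z <= t} have VC dimension 1. Under
# Z ~ Uniform[0,1], P(f_t) = t, so sup_t |P_n(f_t) - P(f_t)| is the
# Kolmogorov-Smirnov statistic, computable from the sorted sample.
sups = []
for _ in range(trials):
    z = np.sort(rng.uniform(size=n))
    i = np.arange(1, n + 1)
    sups.append(max(np.max(i / n - z), np.max(z - (i - 1) / n)))

bound = np.sqrt(8 / n * (d * np.log(n / d) + np.log(4 / delta)))
print(f"simulated {1 - delta:.0%} quantile of the sup: "
      f"{np.quantile(sups, 1 - delta):.4f}")
print(f"Theorem 1 bound: {bound:.4f}")
```

For this simple class the theorem holds with a great deal of room to spare, which is consistent with the looseness the answer below attributes to the $\log(n/d)$ factor.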

The following bound, based on the Rademacher complexity of $\mathcal{F}$, denoted $\mathcal{R}_n(\mathcal{F})$, is also known.

Theorem 2. $$\mathbb{P}\left(\sup_{f\in\mathcal{F}}\left|P_n(f)-P(f)\right| \leq 2\mathcal{R}_n(\mathcal{F}) + \sqrt{\frac{1}{2n}\log\frac{2}{\delta}}\right) \geq 1-\delta.$$
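Theorem 2 is stated with the distribution-dependent Rademacher complexity, but for the same threshold class one can estimate the empirical counterpart $\hat{\mathcal{R}}_n(\mathcal{F}) = \mathbb{E}_\sigma \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_i \sigma_i f(Z_i)$ by Monte Carlo and plug it in as a proxy. A sketch, with illustrative parameters (data-dependent versions of Theorem 2, with $\hat{\mathcal{R}}_n$ in place of $\mathcal{R}_n$, hold with slightly different constants):

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta, mc = 2000, 0.05, 500

z = np.sort(rng.uniform(size=n))  # one fixed sample
# For thresholds, sup_t (1/n) sum_i sigma_i 1{z_i <= t} is the largest
# prefix sum of the sigmas (in sorted-z order), divided by n; the empty
# prefix (value 0) corresponds to a threshold below the whole sample.
vals = []
for _ in range(mc):
    sigma = rng.choice([-1.0, 1.0], size=n)
    prefix = np.concatenate(([0.0], np.cumsum(sigma)))
    vals.append(prefix.max() / n)
rad_hat = float(np.mean(vals))

bound = 2 * rad_hat + np.sqrt(np.log(2 / delta) / (2 * n))
print(f"Monte Carlo estimate of the empirical Rademacher complexity: {rad_hat:.4f}")
print(f"Theorem 2 bound with the estimate plugged in: {bound:.4f}")
```

For this class the resulting number comes out markedly smaller than the Theorem 1 bound, which is exactly the gap the question is probing.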

Are there bounds on $\sup_{f\in\mathcal{F}}|P_n(f)-P(f)|$ that are tighter than the ones above?

Best Answer

This is almost a duplicate (see my comment above) but
I’ll give a quick answer before it (likely) gets closed. The first
inequality in the OP has a superfluous $\log(n/d)$, which may be
removed via chaining, as discussed here:
Tight VC bound for agnostic learning
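Schematically (my gloss, not part of the original answer; $C$ is an unspecified universal constant), the chained bound takes the form
$$\mathbb{P}\left(\sup_{f\in\mathcal{F}}\left|P_n(f)-P(f)\right| \leq C\sqrt{\frac{d}{n}} + \sqrt{\frac{1}{2n}\log\frac{2}{\delta}}\right) \geq 1-\delta,$$
which can be obtained by bounding $\mathcal{R}_n(\mathcal{F}) \leq C'\sqrt{d/n}$ via Dudley's entropy integral together with Haussler's packing bound, then plugging that into Theorem 2.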

Once that log-factor is removed from the VC bound, it becomes
optimal up to constants (i.e., essentially unimprovable). See
Section 5 in the Anthony-Bartlett book: http://www.cambridge.org/gb/academic/subjects/computer-science/pattern-recognition-and-machine-learning/neural-network-learning-theoretical-foundations?format=HB&isbn=9780521573535#JEHXi1lP4I6Gl8dg.97
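To make "optimal up to constants" concrete (again my gloss): there is a universal $c>0$ such that for any class $\mathcal{F}$ of VC dimension $d$ and any $n\geq d$, some distribution on $\Omega$ satisfies
$$\mathbb{E}\sup_{f\in\mathcal{F}}\left|P_n(f)-P(f)\right| \geq c\sqrt{\frac{d}{n}},$$
e.g., the uniform distribution on a set of $d$ points shattered by $\mathcal{F}$, so no bound of the above form can improve on $\sqrt{d/n}$ by more than a constant.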

Regarding the optimality of the Rademacher bound, see the
discussion and links here concerning Sudakov’s inequality: Rademacher
complexity beyond the agnostic setting
