Update notebooks to match the original (English) book's repository

Haesun Park
2018-05-18 18:14:38 +09:00
parent c8c94e12eb
commit 5064b1dc99
3 changed files with 150 additions and 102 deletions

.gitignore vendored

@@ -1,12 +1,17 @@
 *.bak
 *.ckpt
+*.old
 *.pyc
 .DS_Store
 .ipynb_checkpoints
 checkpoint
 logs/*
 tf_logs/*
+images/**/*.png
+images/**/*.dot
 my_*
-datasets/words
 datasets/flowers
+datasets/lifesat/lifesat.csv
 datasets/spam
+datasets/words

View File

@@ -134,54 +134,54 @@
"**식 4-2: 선형 회귀 모델의 예측 (벡터 형태)**\n", "**식 4-2: 선형 회귀 모델의 예측 (벡터 형태)**\n",
"\n", "\n",
"$\n", "$\n",
"\\hat{y} = h_{\\mathbf{\\theta}}(\\mathbf{x}) = \\mathbf{\\theta}^T \\cdot \\mathbf{x}\n", "\\hat{y} = h_{\\boldsymbol{\\theta}}(\\mathbf{x}) = \\boldsymbol{\\theta}^T \\cdot \\mathbf{x}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-3: 선형 회귀 모델의 MSE 비용 함수**\n", "**식 4-3: 선형 회귀 모델의 MSE 비용 함수**\n",
"\n", "\n",
"$\n", "$\n",
"\\text{MSE}(\\mathbf{X}, h_{\\mathbf{\\theta}}) = \\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{(\\mathbf{\\theta}^T \\cdot \\mathbf{x}^{(i)} - y^{(i)})^2}\n", "\\text{MSE}(\\mathbf{X}, h_{\\boldsymbol{\\theta}}) = \\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{(\\boldsymbol{\\theta}^T \\mathbf{x}^{(i)} - y^{(i)})^2}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-4: 정규 방정식**\n", "**식 4-4: 정규 방정식**\n",
"\n", "\n",
"$\n", "$\n",
"\\hat{\\mathbf{\\theta}} = (\\mathbf{X}^T \\cdot \\mathbf{X})^{-1} \\cdot \\mathbf{X}^T \\cdot \\mathbf{y}\n", "\\hat{\\boldsymbol{\\theta}} = (\\mathbf{X}^T \\mathbf{X})^{-1} \\mathbf{X}^T \\mathbf{y}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"** 편도함수 기호 (165 페이지):**\n", "**편도함수 기호 (165 페이지):**\n",
"\n", "\n",
"$\\frac{\\partial}{\\partial \\theta_j} \\text{MSE}(\\mathbf{\\theta})$\n", "$\\frac{\\partial}{\\partial \\theta_j} \\text{MSE}(\\boldsymbol{\\theta})$\n",
"\n", "\n",
"\n", "\n",
"**식 4-5: 비용 함수의 편도함수**\n", "**식 4-5: 비용 함수의 편도함수**\n",
"\n", "\n",
"$\n", "$\n",
"\\dfrac{\\partial}{\\partial \\theta_j} \\text{MSE}(\\mathbf{\\theta}) = \\dfrac{2}{m}\\sum\\limits_{i=1}^{m}(\\mathbf{\\theta}^T \\cdot \\mathbf{x}^{(i)} - y^{(i)})\\, x_j^{(i)}\n", "\\dfrac{\\partial}{\\partial \\theta_j} \\text{MSE}(\\boldsymbol{\\theta}) = \\dfrac{2}{m}\\sum\\limits_{i=1}^{m}(\\boldsymbol{\\theta}^T \\mathbf{x}^{(i)} - y^{(i)})\\, x_j^{(i)}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-6: 비용 함수의 그래디언트 벡터**\n", "**식 4-6: 비용 함수의 그래디언트 벡터**\n",
"\n", "\n",
"$\n", "$\n",
"\\nabla_{\\mathbf{\\theta}}\\, \\text{MSE}(\\mathbf{\\theta}) =\n", "\\nabla_{\\boldsymbol{\\theta}}\\, \\text{MSE}(\\boldsymbol{\\theta}) =\n",
"\\begin{pmatrix}\n", "\\begin{pmatrix}\n",
" \\frac{\\partial}{\\partial \\theta_0} \\text{MSE}(\\mathbf{\\theta}) \\\\\n", " \\frac{\\partial}{\\partial \\theta_0} \\text{MSE}(\\boldsymbol{\\theta}) \\\\\n",
" \\frac{\\partial}{\\partial \\theta_1} \\text{MSE}(\\mathbf{\\theta}) \\\\\n", " \\frac{\\partial}{\\partial \\theta_1} \\text{MSE}(\\boldsymbol{\\theta}) \\\\\n",
" \\vdots \\\\\n", " \\vdots \\\\\n",
" \\frac{\\partial}{\\partial \\theta_n} \\text{MSE}(\\mathbf{\\theta})\n", " \\frac{\\partial}{\\partial \\theta_n} \\text{MSE}(\\boldsymbol{\\theta})\n",
"\\end{pmatrix}\n", "\\end{pmatrix}\n",
" = \\dfrac{2}{m} \\mathbf{X}^T \\cdot (\\mathbf{X} \\cdot \\mathbf{\\theta} - \\mathbf{y})\n", " = \\dfrac{2}{m} \\mathbf{X}^T (\\mathbf{X} \\boldsymbol{\\theta} - \\mathbf{y})\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-7: 경사 하강법의 스텝**\n", "**식 4-7: 경사 하강법의 스텝**\n",
"\n", "\n",
"$\n", "$\n",
"\\mathbf{\\theta}^{(\\text{다음 스텝})}\\,\\,\\, = \\mathbf{\\theta} - \\eta \\nabla_{\\mathbf{\\theta}}\\, \\text{MSE}(\\mathbf{\\theta})\n", "\\boldsymbol{\\theta}^{(\\text{다음 스텝})}\\,\\,\\, = \\boldsymbol{\\theta} - \\eta \\nabla_{\\boldsymbol{\\theta}}\\, \\text{MSE}(\\boldsymbol{\\theta})\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
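The hunk above updates Equations 4-2 through 4-7 (linear regression, the normal equation, and the gradient descent step). As a sanity check of the final notation, here is a minimal NumPy sketch of Eq. 4-4 and Eqs. 4-6/4-7; the synthetic data and variable names are illustrative, not taken from the notebook:

```python
import numpy as np

# Synthetic data: y = 4 + 3x + noise (illustrative values only)
rng = np.random.default_rng(42)
m = 100
X = 2 * rng.random((m, 1))
y = 4 + 3 * X + rng.standard_normal((m, 1))
X_b = np.c_[np.ones((m, 1)), X]          # prepend the bias feature x0 = 1

# Eq. 4-4 (normal equation): theta_hat = (X^T X)^{-1} X^T y
theta_normal = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y

# Eq. 4-6 (gradient vector) and Eq. 4-7 (gradient descent step)
eta = 0.1
theta = np.zeros((2, 1))
for _ in range(2000):
    gradients = 2 / m * X_b.T @ (X_b @ theta - y)   # Eq. 4-6
    theta = theta - eta * gradients                  # Eq. 4-7
```

Both routes should agree: gradient descent converges to the closed-form solution.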
@@ -197,34 +197,34 @@
"$ \\dfrac{(n+d)!}{d!\\,n!} $\n", "$ \\dfrac{(n+d)!}{d!\\,n!} $\n",
"\n", "\n",
"\n", "\n",
"$ \\alpha \\sum_{i=1}^{n}{\\theta_i^2}$\n", "$ \\alpha \\sum_{i=1}^{n}{{\\theta_i}^2}$\n",
"\n", "\n",
"\n", "\n",
"**식 4-8: 릿지 회귀의 비용 함수**\n", "**식 4-8: 릿지 회귀의 비용 함수**\n",
"\n", "\n",
"$\n", "$\n",
"J(\\mathbf{\\theta}) = \\text{MSE}(\\mathbf{\\theta}) + \\alpha \\dfrac{1}{2}\\sum\\limits_{i=1}^{n}\\theta_i^2\n", "J(\\boldsymbol{\\theta}) = \\text{MSE}(\\boldsymbol{\\theta}) + \\alpha \\dfrac{1}{2}\\sum\\limits_{i=1}^{n}\\theta_i^2\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-9: 릿지 회귀의 정규 방정식**\n", "**식 4-9: 릿지 회귀의 정규 방정식**\n",
"\n", "\n",
"$\n", "$\n",
"\\hat{\\mathbf{\\theta}} = (\\mathbf{X}^T \\cdot \\mathbf{X} + \\alpha \\mathbf{A})^{-1} \\cdot \\mathbf{X}^T \\cdot \\mathbf{y}\n", "\\hat{\\boldsymbol{\\theta}} = (\\mathbf{X}^T \\mathbf{X} + \\alpha \\mathbf{A})^{-1} \\mathbf{X}^T \\mathbf{y}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-10: 라쏘 회귀의 비용 함수**\n", "**식 4-10: 라쏘 회귀의 비용 함수**\n",
"\n", "\n",
"$\n", "$\n",
"J(\\mathbf{\\theta}) = \\text{MSE}(\\mathbf{\\theta}) + \\alpha \\sum\\limits_{i=1}^{n}\\left| \\theta_i \\right|\n", "J(\\boldsymbol{\\theta}) = \\text{MSE}(\\boldsymbol{\\theta}) + \\alpha \\sum\\limits_{i=1}^{n}\\left| \\theta_i \\right|\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-11: 라쏘 회귀의 서브그래디언트 벡터**\n", "**식 4-11: 라쏘 회귀의 서브그래디언트 벡터**\n",
"\n", "\n",
"$\n", "$\n",
"g(\\mathbf{\\theta}, J) = \\nabla_{\\mathbf{\\theta}}\\, \\text{MSE}(\\mathbf{\\theta}) + \\alpha\n", "g(\\boldsymbol{\\theta}, J) = \\nabla_{\\boldsymbol{\\theta}}\\, \\text{MSE}(\\boldsymbol{\\theta}) + \\alpha\n",
"\\begin{pmatrix}\n", "\\begin{pmatrix}\n",
" \\operatorname{sign}(\\theta_1) \\\\\n", " \\operatorname{sign}(\\theta_1) \\\\\n",
" \\operatorname{sign}(\\theta_2) \\\\\n", " \\operatorname{sign}(\\theta_2) \\\\\n",
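The hunk above covers the ridge and lasso cost functions. A small sketch of the ridge normal equation (Eq. 4-9), with illustrative data; `A` is the identity with a zero in the bias position, as in the book:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50
X = rng.random((m, 2))
y = X @ np.array([1.5, -2.0]) + 0.5 + 0.1 * rng.standard_normal(m)
X_b = np.c_[np.ones(m), X]

# Eq. 4-9: theta_hat = (X^T X + alpha A)^{-1} X^T y,
# where A is the identity except A[0, 0] = 0 (the bias is not regularized)
alpha = 1.0
A = np.eye(X_b.shape[1])
A[0, 0] = 0.0
theta_ridge = np.linalg.solve(X_b.T @ X_b + alpha * A, X_b.T @ y)

# alpha = 0 reduces this to the plain normal equation (Eq. 4-4)
theta_ols = np.linalg.solve(X_b.T @ X_b, X_b.T @ y)
```

Regularization can only shrink the norm of the penalized weights relative to ordinary least squares.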
@@ -242,14 +242,14 @@
"**식 4-12: 엘라스틱넷 비용 함수**\n", "**식 4-12: 엘라스틱넷 비용 함수**\n",
"\n", "\n",
"$\n", "$\n",
"J(\\mathbf{\\theta}) = \\text{MSE}(\\mathbf{\\theta}) + r \\alpha \\sum\\limits_{i=1}^{n}\\left| \\theta_i \\right| + \\dfrac{1 - r}{2} \\alpha \\sum\\limits_{i=1}^{n}{\\theta_i^2}\n", "J(\\boldsymbol{\\theta}) = \\text{MSE}(\\boldsymbol{\\theta}) + r \\alpha \\sum\\limits_{i=1}^{n}\\left| \\theta_i \\right| + \\dfrac{1 - r}{2} \\alpha \\sum\\limits_{i=1}^{n}{\\theta_i^2}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-13: 로지스틱 회귀 모델의 확률 추정(벡터 표현식)**\n", "**식 4-13: 로지스틱 회귀 모델의 확률 추정(벡터 표현식)**\n",
"\n", "\n",
"$\n", "$\n",
"\\hat{p} = h_{\\mathbf{\\theta}}(\\mathbf{x}) = \\sigma(\\mathbf{\\theta}^T \\cdot \\mathbf{x})\n", "\\hat{p} = h_{\\boldsymbol{\\theta}}(\\mathbf{x}) = \\sigma(\\boldsymbol{\\theta}^T \\mathbf{x})\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
@@ -274,7 +274,7 @@
"**식 4-16: 하나의 훈련 샘플에 대한 비용 함수**\n", "**식 4-16: 하나의 훈련 샘플에 대한 비용 함수**\n",
"\n", "\n",
"$\n", "$\n",
"c(\\mathbf{\\theta}) =\n", "c(\\boldsymbol{\\theta}) =\n",
"\\begin{cases}\n", "\\begin{cases}\n",
" -\\log(\\hat{p}) & y = 1 \\text{일 때 } \\\\\n", " -\\log(\\hat{p}) & y = 1 \\text{일 때 } \\\\\n",
" -\\log(1 - \\hat{p}) & y = 0 \\text{일 때 }\n", " -\\log(1 - \\hat{p}) & y = 0 \\text{일 때 }\n",
@@ -285,21 +285,21 @@
"**식 4-17: 로지스틱 회귀의 비용 함수(로그 손실)**\n", "**식 4-17: 로지스틱 회귀의 비용 함수(로그 손실)**\n",
"\n", "\n",
"$\n", "$\n",
"J(\\mathbf{\\theta}) = -\\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{\\left[ y^{(i)} log\\left(\\hat{p}^{(i)}\\right) + (1 - y^{(i)}) log\\left(1 - \\hat{p}^{(i)}\\right)\\right]}\n", "J(\\boldsymbol{\\theta}) = -\\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{\\left[ y^{(i)} \\log\\left(\\hat{p}^{(i)}\\right) + (1 - y^{(i)}) \\log\\left(1 - \\hat{p}^{(i)}\\right)\\right]}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-18: 로지스틱 비용 함수의 편도함수**\n", "**식 4-18: 로지스틱 비용 함수의 편도함수**\n",
"\n", "\n",
"$\n", "$\n",
"\\dfrac{\\partial}{\\partial \\theta_j} \\text{J}(\\mathbf{\\theta}) = \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\left(\\mathbf{\\sigma(\\theta}^T \\cdot \\mathbf{x}^{(i)}) - y^{(i)}\\right)\\, x_j^{(i)}\n", "\\dfrac{\\partial}{\\partial \\theta_j} \\text{J}(\\boldsymbol{\\theta}) = \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\left(\\sigma({\\boldsymbol{\\theta}}^T \\mathbf{x}^{(i)}) - y^{(i)}\\right)\\, x_j^{(i)}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-19: 클래스 k에 대한 소프트맥스 점수**\n", "**식 4-19: 클래스 k에 대한 소프트맥스 점수**\n",
"\n", "\n",
"$\n", "$\n",
"s_k(\\mathbf{x}) = ({\\mathbf{\\theta}^{(k)}})^T \\cdot \\mathbf{x}\n", "s_k(\\mathbf{x}) = ({\\boldsymbol{\\theta}^{(k)}})^T \\mathbf{x}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
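The log loss (Eq. 4-17) and its partial derivatives (Eq. 4-18) updated above can be sketched directly in NumPy; the tiny synthetic problem and the learning rate are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(theta, X_b, y):
    # Eq. 4-17 (log loss)
    p = sigmoid(X_b @ theta)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def gradient(theta, X_b, y):
    # Eq. 4-18: (1/m) sum (sigma(theta^T x) - y) x_j, vectorized over j
    return X_b.T @ (sigmoid(X_b @ theta) - y) / len(y)

rng = np.random.default_rng(1)
m = 200
X_b = np.c_[np.ones(m), rng.standard_normal((m, 2))]
true_theta = np.array([-0.5, 2.0, -1.0])
y = (rng.random(m) < sigmoid(X_b @ true_theta)).astype(float)

theta = np.zeros(3)
for _ in range(5000):
    theta -= 0.5 * gradient(theta, X_b, y)   # plain batch gradient descent
```

Since the log loss is convex, the gradient norm should be close to zero after training.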
@@ -313,24 +313,24 @@
"**식 4-21: 소프트맥스 회귀 분류기의 예측**\n", "**식 4-21: 소프트맥스 회귀 분류기의 예측**\n",
"\n", "\n",
"$\n", "$\n",
"\\hat{y} = \\underset{k}{\\operatorname{argmax}} \\, \\sigma\\left(\\mathbf{s}(\\mathbf{x})\\right)_k = \\underset{k}{\\operatorname{argmax}} \\, s_k(\\mathbf{x}) = \\underset{k}{\\operatorname{argmax}} \\, \\left( ({\\mathbf{\\theta}^{(k)}})^T \\cdot \\mathbf{x} \\right)\n", "\\hat{y} = \\underset{k}{\\operatorname{argmax}} \\, \\sigma\\left(\\mathbf{s}(\\mathbf{x})\\right)_k = \\underset{k}{\\operatorname{argmax}} \\, s_k(\\mathbf{x}) = \\underset{k}{\\operatorname{argmax}} \\, \\left( ({\\boldsymbol{\\theta}^{(k)}})^T \\mathbf{x} \\right)\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 4-22: 크로스 엔트로피 비용 함수**\n", "**식 4-22: 크로스 엔트로피 비용 함수**\n",
"\n", "\n",
"$\n", "$\n",
"J(\\mathbf{\\Theta}) = - \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\sum\\limits_{k=1}^{K}{y_k^{(i)}\\log\\left(\\hat{p}_k^{(i)}\\right)}\n", "J(\\boldsymbol{\\Theta}) = - \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\sum\\limits_{k=1}^{K}{y_k^{(i)}\\log\\left(\\hat{p}_k^{(i)}\\right)}\n",
"$\n", "$\n",
"\n", "\n",
"**두 확률 분포 $p$ 와 $q$ 사이의 크로스 엔트로피 (196 페이지):**\n", "**두 확률 분포 $p$ 와 $q$ 사이의 크로스 엔트로피 (196 페이지):**\n",
"$ H(p, q) = -\\sum\\limits_{x}p(x) \\log q(x) $\n", "$ H(p, q) = -\\sum\\limits_{x}p(x) \\log q(x) $\n",
"\n", "\n",
"\n", "\n",
"**식 4-23: 클래스 k 에 대한 크로스 엔트로피의 그래디언트 벡터**\n", "**식 4-23: 클래스 _k_ 에 대한 크로스 엔트로피의 그래디언트 벡터**\n",
"\n", "\n",
"$\n", "$\n",
"\\nabla_{\\mathbf{\\theta}^{(k)}} \\, J(\\mathbf{\\Theta}) = \\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{ \\left ( \\hat{p}^{(i)}_k - y_k^{(i)} \\right ) \\mathbf{x}^{(i)}}\n", "\\nabla_{\\boldsymbol{\\theta}^{(k)}} \\, J(\\boldsymbol{\\Theta}) = \\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{ \\left ( \\hat{p}^{(i)}_k - y_k^{(i)} \\right ) \\mathbf{x}^{(i)}}\n",
"$\n" "$\n"
] ]
}, },
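The softmax cell above (Eqs. 4-19 through 4-23) maps cleanly onto a few array operations. A minimal sketch with made-up shapes; the max-shift inside `softmax` is a standard numerical-stability trick, not part of the equations:

```python
import numpy as np

def softmax(scores):
    # Eq. 4-19/4-20: scores s_k(x) = (theta^(k))^T x, then normalized
    # exponentials; subtracting the row max only improves numerical stability
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
m, n, K = 6, 3, 4
X = rng.standard_normal((m, n))
Theta = rng.standard_normal((n, K))          # column k is theta^(k)
P = softmax(X @ Theta)                       # estimated probabilities

# Eq. 4-21: the prediction is the class with the highest score
y_pred = P.argmax(axis=1)

# Eq. 4-22 (cross entropy) and Eq. 4-23 (its gradient for each class k)
Y = np.eye(K)[rng.integers(0, K, size=m)]    # one-hot targets
J = -np.mean(np.sum(Y * np.log(P), axis=1))  # Eq. 4-22
grad = X.T @ (P - Y) / m                     # column k is Eq. 4-23
```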
@@ -345,7 +345,7 @@
"**식 5-1: 가우시안 RBF**\n", "**식 5-1: 가우시안 RBF**\n",
"\n", "\n",
"$\n", "$\n",
"{\\displaystyle \\phi_{\\gamma}(\\mathbf{x}, \\mathbf{\\ell})} = {\\displaystyle \\exp({\\displaystyle -\\gamma \\left\\| \\mathbf{x} - \\mathbf{\\ell} \\right\\|^2})}\n", "{\\displaystyle \\phi_{\\gamma}(\\mathbf{x}, \\boldsymbol{\\ell})} = {\\displaystyle \\exp({\\displaystyle -\\gamma \\left\\| \\mathbf{x} - \\boldsymbol{\\ell} \\right\\|^2})}\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
@@ -353,8 +353,8 @@
"\n", "\n",
"$\n", "$\n",
"\\hat{y} = \\begin{cases}\n", "\\hat{y} = \\begin{cases}\n",
" 0 & \\mathbf{w}^T \\cdot \\mathbf{x} + b < 0 \\text{일 때 } \\\\\n", " 0 & \\mathbf{w}^T \\mathbf{x} + b < 0 \\text{일 때 } \\\\\n",
" 1 & \\mathbf{w}^T \\cdot \\mathbf{x} + b \\geq 0 \\text{일 때 }\n", " 1 & \\mathbf{w}^T \\mathbf{x} + b \\geq 0 \\text{일 때 }\n",
"\\end{cases}\n", "\\end{cases}\n",
"$\n", "$\n",
"\n", "\n",
@@ -363,8 +363,8 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"&\\underset{\\mathbf{w}, b}{\\operatorname{minimize}}\\,{\\frac{1}{2}\\mathbf{w}^T \\cdot \\mathbf{w}} \\\\\n", "&\\underset{\\mathbf{w}, b}{\\operatorname{minimize}}\\,{\\frac{1}{2}\\mathbf{w}^T \\mathbf{w}} \\\\\n",
"&[\\text{조건}] \\, i = 1, 2, \\dots, m \\text{일 때} \\quad t^{(i)}(\\mathbf{w}^T \\cdot \\mathbf{x}^{(i)} + b) \\ge 1\n", "&[\\text{조건}] \\, i = 1, 2, \\dots, m \\text{일 때} \\quad t^{(i)}(\\mathbf{w}^T \\mathbf{x}^{(i)} + b) \\ge 1\n",
"\\end{split}\n", "\\end{split}\n",
"$\n", "$\n",
"\n", "\n",
@@ -373,8 +373,8 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"&\\underset{\\mathbf{w}, b, \\mathbf{\\zeta}}{\\operatorname{minimize}}\\,{\\dfrac{1}{2}\\mathbf{w}^T \\cdot \\mathbf{w} + C \\sum\\limits_{i=1}^m{\\zeta^{(i)}}}\\\\\n", "&\\underset{\\mathbf{w}, b, \\boldsymbol{\\zeta}}{\\operatorname{minimize}}\\,{\\dfrac{1}{2}\\mathbf{w}^T \\mathbf{w} + C \\sum\\limits_{i=1}^m{\\zeta^{(i)}}}\\\\\n",
"&[\\text{조건}] \\, i = 1, 2, \\dots, m \\text{일 때} \\quad t^{(i)}(\\mathbf{w}^T \\cdot \\mathbf{x}^{(i)} + b) \\ge 1 - \\zeta^{(i)} \\text{ 이고} \\quad \\zeta^{(i)} \\ge 0\n", "&[\\text{조건}] \\, i = 1, 2, \\dots, m \\text{일 때} \\quad t^{(i)}(\\mathbf{w}^T \\mathbf{x}^{(i)} + b) \\ge 1 - \\zeta^{(i)} \\text{ 이고} \\quad \\zeta^{(i)} \\ge 0\n",
"\\end{split}\n", "\\end{split}\n",
"$\n", "$\n",
"\n", "\n",
@@ -383,8 +383,8 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"\\underset{\\mathbf{p}}{\\text{minimize}} \\, & \\dfrac{1}{2} \\mathbf{p}^T \\cdot \\mathbf{H} \\cdot \\mathbf{p} \\, + \\, \\mathbf{f}^T \\cdot \\mathbf{p} \\\\\n", "\\underset{\\mathbf{p}}{\\text{minimize}} \\, & \\dfrac{1}{2} \\mathbf{p}^T \\mathbf{H} \\mathbf{p} \\, + \\, \\mathbf{f}^T \\mathbf{p} \\\\\n",
"[\\text{조건}] \\, & \\mathbf{A} \\cdot \\mathbf{p} \\le \\mathbf{b} \\\\\n", "[\\text{조건}] \\, & \\mathbf{A} \\mathbf{p} \\le \\mathbf{b} \\\\\n",
"\\text{여기서 } &\n", "\\text{여기서 } &\n",
"\\begin{cases}\n", "\\begin{cases}\n",
" \\mathbf{p} \\, \\text{는 }n_p\\text{ 차원의 벡터 (} n_p = \\text{모델 파라미터 수)}\\\\\n", " \\mathbf{p} \\, \\text{는 }n_p\\text{ 차원의 벡터 (} n_p = \\text{모델 파라미터 수)}\\\\\n",
@@ -404,7 +404,7 @@
"&\\underset{\\mathbf{\\alpha}}{\\operatorname{minimize}} \\,\n", "&\\underset{\\boldsymbol{\\alpha}}{\\operatorname{minimize}} \\,\n",
"\\dfrac{1}{2}\\sum\\limits_{i=1}^{m}{\n", "\\dfrac{1}{2}\\sum\\limits_{i=1}^{m}{\n",
" \\sum\\limits_{j=1}^{m}{\n", " \\sum\\limits_{j=1}^{m}{\n",
" \\alpha^{(i)} \\alpha^{(j)} t^{(i)} t^{(j)} {\\mathbf{x}^{(i)}}^T \\cdot \\mathbf{x}^{(j)}\n", " \\alpha^{(i)} \\alpha^{(j)} t^{(i)} t^{(j)} {\\mathbf{x}^{(i)}}^T \\mathbf{x}^{(j)}\n",
" }\n", " }\n",
"} \\, - \\, \\sum\\limits_{i=1}^{m}{\\alpha^{(i)}}\\\\\n", "} \\, - \\, \\sum\\limits_{i=1}^{m}{\\alpha^{(i)}}\\\\\n",
"&\\text{[조건]}\\,i = 1, 2, \\dots, m \\text{일 때 } \\quad \\alpha^{(i)} \\ge 0\n", "&\\text{[조건]}\\,i = 1, 2, \\dots, m \\text{일 때 } \\quad \\alpha^{(i)} \\ge 0\n",
@@ -417,7 +417,7 @@
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"&\\hat{\\mathbf{w}} = \\sum_{i=1}^{m}{\\hat{\\alpha}}^{(i)}t^{(i)}\\mathbf{x}^{(i)}\\\\\n", "&\\hat{\\mathbf{w}} = \\sum_{i=1}^{m}{\\hat{\\alpha}}^{(i)}t^{(i)}\\mathbf{x}^{(i)}\\\\\n",
"&\\hat{b} = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left(1 - t^{(i)}({\\hat{\\mathbf{w}}}^T \\cdot \\mathbf{x}^{(i)})\\right)}\n", "&\\hat{b} = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left(t^{(i)} - ({\\hat{\\mathbf{w}}}^T \\mathbf{x}^{(i)})\\right)}\n",
"\\end{split}\n", "\\end{split}\n",
"$\n", "$\n",
"\n", "\n",
@@ -440,11 +440,11 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"\\phi(\\mathbf{a})^T \\cdot \\phi(\\mathbf{b}) & \\quad = \\begin{pmatrix}\n", "\\phi(\\mathbf{a})^T \\phi(\\mathbf{b}) & \\quad = \\begin{pmatrix}\n",
" {a_1}^2 \\\\\n", " {a_1}^2 \\\\\n",
" \\sqrt{2} \\, a_1 a_2 \\\\\n", " \\sqrt{2} \\, a_1 a_2 \\\\\n",
" {a_2}^2\n", " {a_2}^2\n",
" \\end{pmatrix}^T \\cdot \\begin{pmatrix}\n", " \\end{pmatrix}^T \\begin{pmatrix}\n",
" {b_1}^2 \\\\\n", " {b_1}^2 \\\\\n",
" \\sqrt{2} \\, b_1 b_2 \\\\\n", " \\sqrt{2} \\, b_1 b_2 \\\\\n",
" {b_2}^2\n", " {b_2}^2\n",
@@ -452,25 +452,25 @@
" & \\quad = \\left( a_1 b_1 + a_2 b_2 \\right)^2 = \\left( \\begin{pmatrix}\n", " & \\quad = \\left( a_1 b_1 + a_2 b_2 \\right)^2 = \\left( \\begin{pmatrix}\n",
" a_1 \\\\\n", " a_1 \\\\\n",
" a_2\n", " a_2\n",
"\\end{pmatrix}^T \\cdot \\begin{pmatrix}\n", "\\end{pmatrix}^T \\begin{pmatrix}\n",
" b_1 \\\\\n", " b_1 \\\\\n",
" b_2\n", " b_2\n",
" \\end{pmatrix} \\right)^2 = (\\mathbf{a}^T \\cdot \\mathbf{b})^2\n", " \\end{pmatrix} \\right)^2 = (\\mathbf{a}^T \\mathbf{b})^2\n",
"\\end{split}\n", "\\end{split}\n",
"$\n", "$\n",
"\n", "\n",
"**커널 트릭에 관한 본문 중에서 (220 페이지):**\n", "**커널 트릭에 관한 본문 중에서 (220 페이지):**\n",
"[...] 변환된 벡터의 점곱을 간단하게 $ ({\\mathbf{x}^{(i)}}^T \\cdot \\mathbf{x}^{(j)})^2 $ 으로 바꿀 수 있습니다.\n", "[...] 변환된 벡터의 점곱을 간단하게 $ ({\\mathbf{x}^{(i)}}^T \\mathbf{x}^{(j)})^2 $ 으로 바꿀 수 있습니다.\n",
"\n", "\n",
"\n", "\n",
"**식 5-10: 일반적인 커널**\n", "**식 5-10: 일반적인 커널**\n",
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"\\text{선형:} & \\quad K(\\mathbf{a}, \\mathbf{b}) = \\mathbf{a}^T \\cdot \\mathbf{b} \\\\\n", "\\text{선형:} & \\quad K(\\mathbf{a}, \\mathbf{b}) = \\mathbf{a}^T \\mathbf{b} \\\\\n",
"\\text{다항식:} & \\quad K(\\mathbf{a}, \\mathbf{b}) = \\left(\\gamma \\mathbf{a}^T \\cdot \\mathbf{b} + r \\right)^d \\\\\n", "\\text{다항식:} & \\quad K(\\mathbf{a}, \\mathbf{b}) = \\left(\\gamma \\mathbf{a}^T \\mathbf{b} + r \\right)^d \\\\\n",
"\\text{가우시안 RBF:} & \\quad K(\\mathbf{a}, \\mathbf{b}) = \\exp({\\displaystyle -\\gamma \\left\\| \\mathbf{a} - \\mathbf{b} \\right\\|^2}) \\\\\n", "\\text{가우시안 RBF:} & \\quad K(\\mathbf{a}, \\mathbf{b}) = \\exp({\\displaystyle -\\gamma \\left\\| \\mathbf{a} - \\mathbf{b} \\right\\|^2}) \\\\\n",
"\\text{시그모이드:} & \\quad K(\\mathbf{a}, \\mathbf{b}) = \\tanh\\left(\\gamma \\mathbf{a}^T \\cdot \\mathbf{b} + r\\right)\n", "\\text{시그모이드:} & \\quad K(\\mathbf{a}, \\mathbf{b}) = \\tanh\\left(\\gamma \\mathbf{a}^T \\mathbf{b} + r\\right)\n",
"\\end{split}\n", "\\end{split}\n",
"$\n", "$\n",
"\n", "\n",
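The kernel-trick identity derived above, φ(**a**)ᵀφ(**b**) = (**a**ᵀ**b**)², can be checked numerically; `a` and `b` are arbitrary example vectors:

```python
import numpy as np

def phi(v):
    # the 2nd-degree polynomial mapping from the derivation above
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

a = np.array([0.7, -1.3])
b = np.array([2.0, 0.5])

lhs = phi(a) @ phi(b)      # dot product in the transformed space
rhs = (a @ b) ** 2         # polynomial kernel of Eq. 5-10 with gamma=1, r=0, d=2
```

The two sides agree exactly, which is why the transformed vectors never need to be computed.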
@@ -478,8 +478,8 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"h_{\\hat{\\mathbf{w}}, \\hat{b}}\\left(\\phi(\\mathbf{x}^{(n)})\\right) & = \\,\\hat{\\mathbf{w}}^T \\cdot \\phi(\\mathbf{x}^{(n)}) + \\hat{b} = \\left(\\sum_{i=1}^{m}{\\hat{\\alpha}}^{(i)}t^{(i)}\\phi(\\mathbf{x}^{(i)})\\right)^T \\cdot \\phi(\\mathbf{x}^{(n)}) + \\hat{b}\\\\\n", "h_{\\hat{\\mathbf{w}}, \\hat{b}}\\left(\\phi(\\mathbf{x}^{(n)})\\right) & = \\,\\hat{\\mathbf{w}}^T \\phi(\\mathbf{x}^{(n)}) + \\hat{b} = \\left(\\sum_{i=1}^{m}{\\hat{\\alpha}}^{(i)}t^{(i)}\\phi(\\mathbf{x}^{(i)})\\right)^T \\phi(\\mathbf{x}^{(n)}) + \\hat{b}\\\\\n",
" & = \\, \\sum_{i=1}^{m}{\\hat{\\alpha}}^{(i)}t^{(i)}\\left(\\phi(\\mathbf{x}^{(i)})^T \\cdot \\phi(\\mathbf{x}^{(n)})\\right) + \\hat{b}\\\\\n", " & = \\, \\sum_{i=1}^{m}{\\hat{\\alpha}}^{(i)}t^{(i)}\\left(\\phi(\\mathbf{x}^{(i)})^T \\phi(\\mathbf{x}^{(n)})\\right) + \\hat{b}\\\\\n",
" & = \\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\hat{\\alpha}}^{(i)}t^{(i)} K(\\mathbf{x}^{(i)}, \\mathbf{x}^{(n)}) + \\hat{b}\n", " & = \\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\hat{\\alpha}}^{(i)}t^{(i)} K(\\mathbf{x}^{(i)}, \\mathbf{x}^{(n)}) + \\hat{b}\n",
"\\end{split}\n", "\\end{split}\n",
"$\n", "$\n",
@@ -489,9 +489,9 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"\\hat{b} & = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left(t^{(i)} - {\\hat{\\mathbf{w}}}^T \\cdot \\phi(\\mathbf{x}^{(i)})\\right)} = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left(t^{(i)} - {\n", "\\hat{b} & = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left(t^{(i)} - {\\hat{\\mathbf{w}}}^T \\phi(\\mathbf{x}^{(i)})\\right)} = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left(t^{(i)} - {\n",
" \\left(\\sum_{j=1}^{m}{\\hat{\\alpha}}^{(j)}t^{(j)}\\phi(\\mathbf{x}^{(j)})\\right)\n", " \\left(\\sum_{j=1}^{m}{\\hat{\\alpha}}^{(j)}t^{(j)}\\phi(\\mathbf{x}^{(j)})\\right)\n",
" }^T \\cdot \\phi(\\mathbf{x}^{(i)})\\right)}\\\\\n", " }^T \\phi(\\mathbf{x}^{(i)})\\right)}\\\\\n",
" & = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left(t^{(i)} - \n", " & = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left(t^{(i)} - \n",
"\\sum\\limits_{\\scriptstyle j=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(j)} > 0}}^{m}{\n", "\\sum\\limits_{\\scriptstyle j=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(j)} > 0}}^{m}{\n",
" {\\hat{\\alpha}}^{(j)} t^{(j)} K(\\mathbf{x}^{(i)},\\mathbf{x}^{(j)})\n", " {\\hat{\\alpha}}^{(j)} t^{(j)} K(\\mathbf{x}^{(i)},\\mathbf{x}^{(j)})\n",
@@ -504,7 +504,7 @@
"**식 5-13: 선형 SVM 분류기 비용 함수**\n", "**식 5-13: 선형 SVM 분류기 비용 함수**\n",
"\n", "\n",
"$\n", "$\n",
"J(\\mathbf{w}, b) = \\dfrac{1}{2} \\mathbf{w}^T \\cdot \\mathbf{w} \\, + \\, C {\\displaystyle \\sum\\limits_{i=1}^{m}max\\left(0, 1 - t^{(i)}(\\mathbf{w}^T \\cdot \\mathbf{x}^{(i)} + b) \\right)}\n", "J(\\mathbf{w}, b) = \\dfrac{1}{2} \\mathbf{w}^T \\mathbf{w} \\, + \\, C {\\displaystyle \\sum\\limits_{i=1}^{m}\\max\\left(0, 1 - t^{(i)}(\\mathbf{w}^T \\mathbf{x}^{(i)} + b) \\right)}\n",
"$\n", "$\n",
"\n", "\n",
"\n" "\n"
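Eq. 5-13 above is the hinge-loss formulation of the linear SVM classifier. A tiny sketch with two hand-picked points (illustrative values, not from the notebook):

```python
import numpy as np

def svm_cost(w, b, X, t, C=1.0):
    # Eq. 5-13: J(w, b) = 1/2 w^T w + C sum max(0, 1 - t (w^T x + b))
    margins = t * (X @ w + b)
    return 0.5 * (w @ w) + C * np.sum(np.maximum(0.0, 1.0 - margins))

# Two well-separated points: the hinge terms vanish, only 1/2 w^T w remains
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
t = np.array([1.0, -1.0])
cost_sep = svm_cost(np.array([1.0, 0.0]), 0.0, X, t)   # margins are 2 and 2

# With w = 0 every sample violates the margin by exactly 1
cost_zero = svm_cost(np.zeros(2), 0.0, X, t)
```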
@@ -637,14 +637,14 @@
"**식 8-2: 훈련 세트를 _d_차원으로 투영하기**\n", "**식 8-2: 훈련 세트를 _d_차원으로 투영하기**\n",
"\n", "\n",
"$\n", "$\n",
"\\mathbf{X}_{d\\text{-proj}} = \\mathbf{X} \\cdot \\mathbf{W}_d\n", "\\mathbf{X}_{d\\text{-proj}} = \\mathbf{X} \\mathbf{W}_d\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
"**식 8-3: 원본의 차원 수로 되돌리는 PCA 역변환**\n", "**식 8-3: 원본의 차원 수로 되돌리는 PCA 역변환**\n",
"\n", "\n",
"$\n", "$\n",
"\\mathbf{X}_{\\text{recovered}} = \\mathbf{X}_{d\\text{-proj}} \\cdot {\\mathbf{W}_d}^T\n", "\\mathbf{X}_{\\text{recovered}} = \\mathbf{X}_{d\\text{-proj}} {\\mathbf{W}_d}^T\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
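The PCA projection (Eq. 8-2) and inverse transformation (Eq. 8-3) updated above amount to two matrix products once `W_d` is taken from the SVD of the centered data. A minimal sketch with synthetic 3-D data lying near a 2-D plane (illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 60
# 3-D data close to a 2-D plane, plus a little noise
X = rng.standard_normal((m, 2)) @ rng.standard_normal((2, 3))
X = X + 0.01 * rng.standard_normal((m, 3))
X_centered = X - X.mean(axis=0)

# W_d = first d columns of V from the SVD of the centered training set
U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
d = 2
W_d = Vt[:d].T

X_d_proj = X_centered @ W_d        # Eq. 8-2: project down to d dimensions
X_recovered = X_d_proj @ W_d.T     # Eq. 8-3: map back to the original space
```

Because the data is nearly planar, the reconstruction error is tiny.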
@@ -685,7 +685,7 @@
"**식 9-1: ReLU 함수**\n", "**식 9-1: ReLU 함수**\n",
"\n", "\n",
"$\n", "$\n",
"h_{\\mathbf{w}, b}(\\mathbf{X}) = \\max(\\mathbf{X} \\cdot \\mathbf{w} + b, 0)\n", "h_{\\mathbf{w}, b}(\\mathbf{X}) = \\max(\\mathbf{X} \\mathbf{w} + b, 0)\n",
"$" "$"
] ]
}, },
@@ -791,8 +791,8 @@
"\n", "\n",
"**식 11-4: 모멘텀 알고리즘**\n", "**식 11-4: 모멘텀 알고리즘**\n",
"\n", "\n",
"1. $\\mathbf{m} \\gets \\beta \\mathbf{m} - \\eta \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta})$\n", "1. $\\mathbf{m} \\gets \\beta \\mathbf{m} - \\eta \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta})$\n",
"2. $\\mathbf{\\theta} \\gets \\mathbf{\\theta} + \\mathbf{m}$\n", "2. $\\boldsymbol{\\theta} \\gets \\boldsymbol{\\theta} + \\mathbf{m}$\n",
"\n", "\n",
"**377 페이지에서**\n", "**377 페이지에서**\n",
"\n", "\n",
@@ -800,35 +800,35 @@
"\n", "\n",
"**식 11-5: 네스테로프 가속 경사 알고리즘**\n", "**식 11-5: 네스테로프 가속 경사 알고리즘**\n",
"\n", "\n",
"1. $\\mathbf{m} \\gets \\beta \\mathbf{m} - \\eta \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta} + \\beta \\mathbf{m})$\n", "1. $\\mathbf{m} \\gets \\beta \\mathbf{m} - \\eta \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta} + \\beta \\mathbf{m})$\n",
"2. $\\mathbf{\\theta} \\gets \\mathbf{\\theta} + \\mathbf{m}$\n", "2. $\\boldsymbol{\\theta} \\gets \\boldsymbol{\\theta} + \\mathbf{m}$\n",
"\n", "\n",
"**식 11-6: AdaGrad 알고리즘**\n", "**식 11-6: AdaGrad 알고리즘**\n",
"\n", "\n",
"1. $\\mathbf{s} \\gets \\mathbf{s} + \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta}) \\otimes \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta})$\n", "1. $\\mathbf{s} \\gets \\mathbf{s} + \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta}) \\otimes \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta})$\n",
"2. $\\mathbf{\\theta} \\gets \\mathbf{\\theta} - \\eta \\, \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta}) \\oslash {\\sqrt{\\mathbf{s} + \\epsilon}}$\n", "2. $\\boldsymbol{\\theta} \\gets \\boldsymbol{\\theta} - \\eta \\, \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta}) \\oslash {\\sqrt{\\mathbf{s} + \\epsilon}}$\n",
"\n", "\n",
"**381 페이지 본문 중에서**\n", "**381 페이지 본문 중에서**\n",
"\n", "\n",
"이 벡터 형식의 계산은 벡터 $\\mathbf{s}$의 각 원소 $s_i$마다 $s_i \\gets s_i + \\left( \\dfrac{\\partial J(\\mathbf{\\theta})}{\\partial \\theta_i} \\right)^2$ 을 계산하는 것과 동일합니다.\n", "이 벡터 형식의 계산은 벡터 $\\mathbf{s}$의 각 원소 $s_i$마다 $s_i \\gets s_i + \\left( \\dfrac{\\partial J(\\boldsymbol{\\theta})}{\\partial \\theta_i} \\right)^2$ 을 계산하는 것과 동일합니다.\n",
"\n", "\n",
"**381 페이지 본문 중에서**\n", "**381 페이지 본문 중에서**\n",
"\n", "\n",
"이 벡터 형식의 계산은 모든 파라미터 $\\theta_i$에 대해 (동시에) $ \\theta_i \\gets \\theta_i - \\eta \\, \\dfrac{\\partial J(\\mathbf{\\theta})}{\\partial \\theta_i} \\dfrac{1}{\\sqrt{s_i + \\epsilon}} $ 을 계산하는 것과 동일합니다.\n", "이 벡터 형식의 계산은 모든 파라미터 $\\theta_i$에 대해 (동시에) $ \\theta_i \\gets \\theta_i - \\eta \\, \\dfrac{\\partial J(\\boldsymbol{\\theta})}{\\partial \\theta_i} \\dfrac{1}{\\sqrt{s_i + \\epsilon}} $ 을 계산하는 것과 동일합니다.\n",
"\n", "\n",
"**식 11-7: RMSProp 알고리즘**\n", "**식 11-7: RMSProp 알고리즘**\n",
"\n", "\n",
"1. $\\mathbf{s} \\gets \\beta \\mathbf{s} + (1 - \\beta ) \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta}) \\otimes \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta})$\n", "1. $\\mathbf{s} \\gets \\beta \\mathbf{s} + (1 - \\beta ) \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta}) \\otimes \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta})$\n",
"2. $\\mathbf{\\theta} \\gets \\mathbf{\\theta} - \\eta \\, \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta}) \\oslash {\\sqrt{\\mathbf{s} + \\epsilon}}$\n", "2. $\\boldsymbol{\\theta} \\gets \\boldsymbol{\\theta} - \\eta \\, \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta}) \\oslash {\\sqrt{\\mathbf{s} + \\epsilon}}$\n",
"\n", "\n",
"\n", "\n",
"**식 11-8: Adam 알고리즘**\n", "**식 11-8: Adam 알고리즘**\n",
"\n", "\n",
"1. $\\mathbf{m} \\gets \\beta_1 \\mathbf{m} - (1 - \\beta_1) \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta})$\n", "1. $\\mathbf{m} \\gets \\beta_1 \\mathbf{m} - (1 - \\beta_1) \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta})$\n",
"2. $\\mathbf{s} \\gets \\beta_2 \\mathbf{s} + (1 - \\beta_2) \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta}) \\otimes \\nabla_\\mathbf{\\theta}J(\\mathbf{\\theta})$\n", "2. $\\mathbf{s} \\gets \\beta_2 \\mathbf{s} + (1 - \\beta_2) \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta}) \\otimes \\nabla_\\boldsymbol{\\theta}J(\\boldsymbol{\\theta})$\n",
"3. $\\mathbf{m} \\gets \\left(\\dfrac{\\mathbf{m}}{1 - {\\beta_1}^T}\\right)$\n", "3. $\\mathbf{m} \\gets \\left(\\dfrac{\\mathbf{m}}{1 - {\\beta_1}^T}\\right)$\n",
"4. $\\mathbf{s} \\gets \\left(\\dfrac{\\mathbf{s}}{1 - {\\beta_2}^T}\\right)$\n", "4. $\\mathbf{s} \\gets \\left(\\dfrac{\\mathbf{s}}{1 - {\\beta_2}^T}\\right)$\n",
"5. $\\mathbf{\\theta} \\gets \\mathbf{\\theta} + \\eta \\, \\mathbf{m} \\oslash {\\sqrt{\\mathbf{s} + \\epsilon}}$\n", "5. $\\boldsymbol{\\theta} \\gets \\boldsymbol{\\theta} + \\eta \\, \\mathbf{m} \\oslash {\\sqrt{\\mathbf{s} + \\epsilon}}$\n",
"\n", "\n",
"**393 페이지 본문 중에서**\n", "**393 페이지 본문 중에서**\n",
"\n", "\n",
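Eq. 11-8 above writes Adam with the book's sign convention (the first moment `m` accumulates *negative* gradients, so step 5 uses "+"). A self-contained sketch on a toy quadratic objective; the learning rate and iteration count are illustrative choices:

```python
import numpy as np

def adam_step(theta, grad, m, s, t, eta=0.02, beta1=0.9, beta2=0.999, eps=1e-8):
    # Eq. 11-8; t plays the role of the iteration number T in steps 3 and 4
    m = beta1 * m - (1 - beta1) * grad                   # 1
    s = beta2 * s + (1 - beta2) * grad * grad            # 2
    m_hat = m / (1 - beta1 ** t)                         # 3
    s_hat = s / (1 - beta2 ** t)                         # 4
    theta = theta + eta * m_hat / np.sqrt(s_hat + eps)   # 5
    return theta, m, s

# minimize J(theta) = ||theta - target||^2 as a toy objective
target = np.array([3.0, -1.0])
theta = np.zeros(2)
m_v = np.zeros(2)
s_v = np.zeros(2)
for t in range(1, 4001):
    grad = 2.0 * (theta - target)
    theta, m_v, s_v = adam_step(theta, grad, m_v, s_v, t)
```

Setting β₁ = 0 and skipping steps 2–4 recovers plain momentum (Eq. 11-4) up to the sign convention.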
@@ -847,7 +847,7 @@
"**식 13-1: 합성곱층에 있는 뉴런의 출력 계산**\n", "**식 13-1: 합성곱층에 있는 뉴런의 출력 계산**\n",
"\n", "\n",
"$\n", "$\n",
"z_{i,j,k} = b_k + \\sum\\limits_{u = 0}^{f_h - 1} \\, \\, \\sum\\limits_{v = 0}^{f_w - 1} \\, \\, \\sum\\limits_{k' = 0}^{f_{n'} - 1} \\, \\, x_{i', j', k'} . w_{u, v, k', k}\n", "z_{i,j,k} = b_k + \\sum\\limits_{u = 0}^{f_h - 1} \\, \\, \\sum\\limits_{v = 0}^{f_w - 1} \\, \\, \\sum\\limits_{k' = 0}^{f_{n'} - 1} \\, \\, x_{i', j', k'} \\times w_{u, v, k', k}\n",
"\\quad \\text{여기서 }\n", "\\quad \\text{여기서 }\n",
"\\begin{cases}\n", "\\begin{cases}\n",
"i' = i \\times s_h + u \\\\\n", "i' = i \\times s_h + u \\\\\n",
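Eq. 13-1 above is a triple sum over the receptive field and input channels. Writing it as explicit loops makes the index mapping i′ = i·s_h + u, j′ = j·s_w + v concrete; the shapes are illustrative and "valid" padding is assumed:

```python
import numpy as np

rng = np.random.default_rng(6)
h, w, fn_prev = 5, 5, 2      # input height, width, channels (f_{n'} in the equation)
fh, fw, fn = 3, 3, 4         # receptive field f_h, f_w and number of feature maps f_n
sh, sw = 1, 1                # strides s_h, s_w
x = rng.standard_normal((h, w, fn_prev))
W = rng.standard_normal((fh, fw, fn_prev, fn))   # w_{u, v, k', k}
b = rng.standard_normal(fn)                      # b_k

out_h = (h - fh) // sh + 1
out_w = (w - fw) // sw + 1
z = np.zeros((out_h, out_w, fn))
for i in range(out_h):
    for j in range(out_w):
        for k in range(fn):
            # Eq. 13-1: z_{i,j,k} = b_k + sum over u, v, k' of x_{i',j',k'} * w
            patch = x[i * sh:i * sh + fh, j * sw:j * sw + fw, :]
            z[i, j, k] = b[k] + np.sum(patch * W[:, :, :, k])
```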
@@ -878,7 +878,7 @@
"**식 14-1: 하나의 샘플에 대한 순환 층의 출력**\n", "**식 14-1: 하나의 샘플에 대한 순환 층의 출력**\n",
"\n", "\n",
"$\n", "$\n",
"\\mathbf{y}_{(t)} = \\phi\\left({{\\mathbf{x}_{(t)}}^T \\cdot \\mathbf{w}_x} + {\\mathbf{y}_{(t-1)}}^T \\cdot {\\mathbf{w}_y} + b \\right)\n", "\\mathbf{y}_{(t)} = \\phi\\left({\\mathbf{W}_x}^T{\\mathbf{x}_{(t)}} + {{\\mathbf{W}_y}^T\\mathbf{y}_{(t-1)}} + \\mathbf{b} \\right)\n",
"$\n", "$\n",
"\n", "\n",
"\n", "\n",
@@ -886,10 +886,10 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"\\mathbf{Y}_{(t)} & = \\phi\\left(\\mathbf{X}_{(t)} \\cdot \\mathbf{W}_{x} + \\mathbf{Y}_{(t-1)}\\cdot \\mathbf{W}_{y} + \\mathbf{b} \\right) \\\\\n", "\\mathbf{Y}_{(t)} & = \\phi\\left(\\mathbf{X}_{(t)} \\mathbf{W}_{x} + \\mathbf{Y}_{(t-1)} \\mathbf{W}_{y} + \\mathbf{b} \\right) \\\\\n",
"& = \\phi\\left(\n", "& = \\phi\\left(\n",
"\\left[\\mathbf{X}_{(t)} \\quad \\mathbf{Y}_{(t-1)} \\right]\n", "\\left[\\mathbf{X}_{(t)} \\quad \\mathbf{Y}_{(t-1)} \\right]\n",
" \\cdot \\mathbf{W} + \\mathbf{b} \\right) \\quad \\text{ 여기서 } \\mathbf{W}=\n", " \\mathbf{W} + \\mathbf{b} \\right) \\quad \\text{ 여기서 } \\mathbf{W}=\n",
"\\left[ \\begin{matrix}\n", "\\left[ \\begin{matrix}\n",
" \\mathbf{W}_x\\\\\n", " \\mathbf{W}_x\\\\\n",
" \\mathbf{W}_y\n", " \\mathbf{W}_y\n",
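Eq. 14-2 above states that the recurrent layer's minibatch output can be computed either with two weight matrices or with one stacked matrix applied to the concatenated inputs. Both forms are easy to verify numerically (shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
batch, n_inputs, n_neurons = 5, 3, 4
X_t = rng.standard_normal((batch, n_inputs))
Y_prev = rng.standard_normal((batch, n_neurons))
W_x = rng.standard_normal((n_inputs, n_neurons))
W_y = rng.standard_normal((n_neurons, n_neurons))
b = rng.standard_normal(n_neurons)

# Eq. 14-2, first form
Y_t = np.tanh(X_t @ W_x + Y_prev @ W_y + b)

# Eq. 14-2, second form: concatenate [X_t  Y_(t-1)] and stack W = [W_x; W_y]
W = np.vstack([W_x, W_y])
Y_t_concat = np.tanh(np.hstack([X_t, Y_prev]) @ W + b)
```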
@@ -905,10 +905,10 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"\\mathbf{i}_{(t)}&=\\sigma({\\mathbf{W}_{xi}}^T \\cdot \\mathbf{x}_{(t)} + {\\mathbf{W}_{hi}}^T \\cdot \\mathbf{h}_{(t-1)} + \\mathbf{b}_i)\\\\\n", "\\mathbf{i}_{(t)}&=\\sigma({\\mathbf{W}_{xi}}^T \\mathbf{x}_{(t)} + {\\mathbf{W}_{hi}}^T \\mathbf{h}_{(t-1)} + \\mathbf{b}_i)\\\\\n",
"\\mathbf{f}_{(t)}&=\\sigma({\\mathbf{W}_{xf}}^T \\cdot \\mathbf{x}_{(t)} + {\\mathbf{W}_{hf}}^T \\cdot \\mathbf{h}_{(t-1)} + \\mathbf{b}_f)\\\\\n", "\\mathbf{f}_{(t)}&=\\sigma({\\mathbf{W}_{xf}}^T \\mathbf{x}_{(t)} + {\\mathbf{W}_{hf}}^T \\mathbf{h}_{(t-1)} + \\mathbf{b}_f)\\\\\n",
"\\mathbf{o}_{(t)}&=\\sigma({\\mathbf{W}_{xo}}^T \\cdot \\mathbf{x}_{(t)} + {\\mathbf{W}_{ho}}^T \\cdot \\mathbf{h}_{(t-1)} + \\mathbf{b}_o)\\\\\n", "\\mathbf{o}_{(t)}&=\\sigma({\\mathbf{W}_{xo}}^T \\mathbf{x}_{(t)} + {\\mathbf{W}_{ho}}^T \\mathbf{h}_{(t-1)} + \\mathbf{b}_o)\\\\\n",
"\\mathbf{g}_{(t)}&=\\operatorname{tanh}({\\mathbf{W}_{xg}}^T \\cdot \\mathbf{x}_{(t)} + {\\mathbf{W}_{hg}}^T \\cdot \\mathbf{h}_{(t-1)} + \\mathbf{b}_g)\\\\\n", "\\mathbf{g}_{(t)}&=\\operatorname{tanh}({\\mathbf{W}_{xg}}^T \\mathbf{x}_{(t)} + {\\mathbf{W}_{hg}}^T \\mathbf{h}_{(t-1)} + \\mathbf{b}_g)\\\\\n",
"\\mathbf{c}_{(t)}&=\\mathbf{f}_{(t)} \\otimes \\mathbf{c}_{(t-1)} \\, + \\, \\mathbf{i}_{(t)} \\otimes \\mathbf{g}_{(t)}\\\\\n", "\\mathbf{c}_{(t)}&=\\mathbf{f}_{(t)} \\otimes \\mathbf{c}_{(t-1)} \\, + \\, \\mathbf{i}_{(t)} \\otimes \\mathbf{g}_{(t)}\\\\\n",
"\\mathbf{y}_{(t)}&=\\mathbf{h}_{(t)} = \\mathbf{o}_{(t)} \\otimes \\operatorname{tanh}(\\mathbf{c}_{(t)})\n", "\\mathbf{y}_{(t)}&=\\mathbf{h}_{(t)} = \\mathbf{o}_{(t)} \\otimes \\operatorname{tanh}(\\mathbf{c}_{(t)})\n",
"\\end{split}\n", "\\end{split}\n",
@@ -919,9 +919,9 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"\\mathbf{z}_{(t)}&=\\sigma({\\mathbf{W}_{xz}}^T \\cdot \\mathbf{x}_{(t)} + {\\mathbf{W}_{hz}}^T \\cdot \\mathbf{h}_{(t-1)}) \\\\\n", "\\mathbf{z}_{(t)}&=\\sigma({\\mathbf{W}_{xz}}^T \\mathbf{x}_{(t)} + {\\mathbf{W}_{hz}}^T \\mathbf{h}_{(t-1)}) \\\\\n",
"\\mathbf{r}_{(t)}&=\\sigma({\\mathbf{W}_{xr}}^T \\cdot \\mathbf{x}_{(t)} + {\\mathbf{W}_{hr}}^T \\cdot \\mathbf{h}_{(t-1)}) \\\\\n", "\\mathbf{r}_{(t)}&=\\sigma({\\mathbf{W}_{xr}}^T \\mathbf{x}_{(t)} + {\\mathbf{W}_{hr}}^T \\mathbf{h}_{(t-1)}) \\\\\n",
"\\mathbf{g}_{(t)}&=\\operatorname{tanh}\\left({\\mathbf{W}_{xg}}^T \\cdot \\mathbf{x}_{(t)} + {\\mathbf{W}_{hg}}^T \\cdot (\\mathbf{r}_{(t)} \\otimes \\mathbf{h}_{(t-1)})\\right) \\\\\n", "\\mathbf{g}_{(t)}&=\\operatorname{tanh}\\left({\\mathbf{W}_{xg}}^T \\mathbf{x}_{(t)} + {\\mathbf{W}_{hg}}^T (\\mathbf{r}_{(t)} \\otimes \\mathbf{h}_{(t-1)})\\right) \\\\\n",
"\\mathbf{h}_{(t)}&=(1-\\mathbf{z}_{(t)}) \\otimes \\mathbf{h}_{(t-1)} + \\mathbf{z}_{(t)} \\otimes \\mathbf{g}_{(t)}\n", "\\mathbf{h}_{(t)}&=(1-\\mathbf{z}_{(t)}) \\otimes \\mathbf{h}_{(t-1)} + \\mathbf{z}_{(t)} \\otimes \\mathbf{g}_{(t)}\n",
"\\end{split}\n", "\\end{split}\n",
"$" "$"
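The GRU equations above (Eq. 14-4) can be sketched for a single time step; weights are random placeholders and, as in the equation, bias terms are omitted:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_cell(x_t, h_prev, Wxz, Whz, Wxr, Whr, Wxg, Whg):
    # Eq. 14-4, one step of a GRU cell (biases omitted as above)
    z = sigmoid(Wxz.T @ x_t + Whz.T @ h_prev)          # update gate
    r = sigmoid(Wxr.T @ x_t + Whr.T @ h_prev)          # reset gate
    g = np.tanh(Wxg.T @ x_t + Whg.T @ (r * h_prev))    # candidate state
    return (1 - z) * h_prev + z * g                    # new state h_t

rng = np.random.default_rng(7)
n_in, n_h = 3, 4
Wxz, Wxr, Wxg = (rng.standard_normal((n_in, n_h)) for _ in range(3))
Whz, Whr, Whg = (rng.standard_normal((n_h, n_h)) for _ in range(3))
x_t = rng.standard_normal(n_in)
h_prev = rng.standard_normal(n_h)
h_t = gru_cell(x_t, h_prev, Wxz, Whz, Wxr, Whr, Wxg, Whg)
```

Since h_t is a per-element convex combination of h_prev and a tanh output, its magnitude can never exceed max(|h_prev|, 1) elementwise.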
@@ -1002,14 +1002,14 @@
"**식 16-6: 탐험 함수를 사용한 Q-러닝**\n", "**식 16-6: 탐험 함수를 사용한 Q-러닝**\n",
"\n", "\n",
"$\n", "$\n",
" Q(s, a) \\gets (1-\\alpha)Q(s,a) + \\alpha\\left(r + \\gamma . \\underset{\\alpha'}{\\max}f(Q(s', a'), N(s', a'))\\right)\n", "Q(s, a) \\gets (1-\\alpha)Q(s,a) + \\alpha\\left(r + \\gamma \\, \\underset{a'}{\\max}f(Q(s', a'), N(s', a'))\\right)\n",
"$\n", "$\n",
"\n", "\n",
"**식 16-7: 타깃 Q-가치**\n", "**식 16-7: 타깃 Q-가치**\n",
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"y(s, a) = r + \\gamma . \\underset{a'}{\\max}Q_{\\theta}(s', a')\n", "y(s, a) = r + \\gamma\\,\\max_{a'}\\,Q_{\\boldsymbol{\\theta}}(s', a')\n",
"\\end{split}\n", "\\end{split}\n",
"$" "$"
] ]
@@ -1109,7 +1109,7 @@
"\n", "\n",
"$\n", "$\n",
"\\begin{split}\n", "\\begin{split}\n",
"\\mathcal{L}(\\mathbf{w}, b, \\mathbf{\\alpha}) = \\frac{1}{2}\\mathbf{w}^T \\cdot \\mathbf{w} - \\sum\\limits_{i=1}^{m}{\\alpha^{(i)} \\left(t^{(i)}(\\mathbf{w}^T \\cdot \\mathbf{x}^{(i)} + b) - 1\\right)} \\\\\n", "\\mathcal{L}(\\mathbf{w}, b, \\mathbf{\\alpha}) = \\frac{1}{2}\\mathbf{w}^T \\mathbf{w} - \\sum\\limits_{i=1}^{m}{\\alpha^{(i)} \\left(t^{(i)}(\\mathbf{w}^T \\mathbf{x}^{(i)} + b) - 1\\right)} \\\\\n",
"\\text{여기서 } \\alpha^{(i)} \\ge 0 \\quad i = 1, 2, \\dots, m \\text{ 에 대해}\n", "\\text{여기서 } \\alpha^{(i)} \\ge 0 \\quad i = 1, 2, \\dots, m \\text{ 에 대해}\n",
"\\end{split}\n", "\\end{split}\n",
"$\n", "$\n",
@@ -1119,7 +1119,7 @@
"$ (\\hat{\\mathbf{w}}, \\hat{b}, \\hat{\\mathbf{\\alpha}}) $\n", "$ (\\hat{\\mathbf{w}}, \\hat{b}, \\hat{\\mathbf{\\alpha}}) $\n",
"\n", "\n",
"\n", "\n",
"$ t^{(i)}((\\hat{\\mathbf{w}})^T \\cdot \\mathbf{x}^{(i)} + \\hat{b}) \\ge 1 \\quad \\text{for } i = 1, 2, \\dots, m $\n", "$ t^{(i)}((\\hat{\\mathbf{w}})^T \\mathbf{x}^{(i)} + \\hat{b}) \\ge 1 \\quad \\text{for } i = 1, 2, \\dots, m $\n",
"\n", "\n",
"\n", "\n",
"$ {\\hat{\\alpha}}^{(i)} \\ge 0 \\quad \\text{for } i = 1, 2, \\dots, m $\n", "$ {\\hat{\\alpha}}^{(i)} \\ge 0 \\quad \\text{for } i = 1, 2, \\dots, m $\n",
@@ -1128,7 +1128,7 @@
"$ {\\hat{\\alpha}}^{(i)} = 0 $\n", "$ {\\hat{\\alpha}}^{(i)} = 0 $\n",
"\n", "\n",
"\n", "\n",
"$ t^{(i)}((\\hat{\\mathbf{w}})^T \\cdot \\mathbf{x}^{(i)} + \\hat{b}) = 1 $\n", "$ t^{(i)}((\\hat{\\mathbf{w}})^T \\mathbf{x}^{(i)} + \\hat{b}) = 1 $\n",
"\n", "\n",
"\n", "\n",
"$ {\\hat{\\alpha}}^{(i)} = 0 $\n", "$ {\\hat{\\alpha}}^{(i)} = 0 $\n",
@@ -1160,7 +1160,7 @@
"\\begin{split}\n", "\\begin{split}\n",
"\\mathcal{L}(\\hat{\\mathbf{w}}, \\hat{b}, \\mathbf{\\alpha}) = \\dfrac{1}{2}\\sum\\limits_{i=1}^{m}{\n", "\\mathcal{L}(\\hat{\\mathbf{w}}, \\hat{b}, \\mathbf{\\alpha}) = \\dfrac{1}{2}\\sum\\limits_{i=1}^{m}{\n",
" \\sum\\limits_{j=1}^{m}{\n", " \\sum\\limits_{j=1}^{m}{\n",
" \\alpha^{(i)} \\alpha^{(j)} t^{(i)} t^{(j)} {\\mathbf{x}^{(i)}}^T \\cdot \\mathbf{x}^{(j)}\n", " \\alpha^{(i)} \\alpha^{(j)} t^{(i)} t^{(j)} {\\mathbf{x}^{(i)}}^T \\mathbf{x}^{(j)}\n",
" }\n", " }\n",
"} \\quad - \\quad \\sum\\limits_{i=1}^{m}{\\alpha^{(i)}}\\\\\n", "} \\quad - \\quad \\sum\\limits_{i=1}^{m}{\\alpha^{(i)}}\\\\\n",
"\\text{여기서 } \\alpha^{(i)} \\ge 0 \\quad i = 1, 2, \\dots, m \\text{ 일 때}\n", "\\text{여기서 } \\alpha^{(i)} \\ge 0 \\quad i = 1, 2, \\dots, m \\text{ 일 때}\n",
@@ -1184,13 +1184,13 @@
"$ \\hat{b} $\n", "$ \\hat{b} $\n",
"\n", "\n",
"\n", "\n",
"$ \\hat{b} = 1 - t^{(k)}({\\hat{\\mathbf{w}}}^T \\cdot \\mathbf{x}^{(k)}) $\n", "$ \\hat{b} = t^{(k)} - ({\\hat{\\mathbf{w}}}^T \\mathbf{x}^{(k)}) $\n",
"\n", "\n",
"\n", "\n",
"**식 C-5: 쌍대 형식을 사용한 편향 추정**\n", "**식 C-5: 쌍대 형식을 사용한 편향 추정**\n",
"\n", "\n",
"$\n", "$\n",
"\\hat{b} = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left[t^{(i)} - {\\hat{\\mathbf{w}}}^T \\cdot \\mathbf{x}^{(i)}\\right]}\n", "\\hat{b} = \\dfrac{1}{n_s}\\sum\\limits_{\\scriptstyle i=1 \\atop {\\scriptstyle {\\hat{\\alpha}}^{(i)} > 0}}^{m}{\\left[t^{(i)} - {\\hat{\\mathbf{w}}}^T \\mathbf{x}^{(i)}\\right]}\n",
"$" "$"
] ]
}, },

View File

@@ -3,14 +3,18 @@
# #
# Then you probably want to work in a virtualenv (optional): # Then you probably want to work in a virtualenv (optional):
# $ sudo pip install --upgrade virtualenv # $ sudo pip install --upgrade virtualenv
# Or if you prefer you can install virtualenv using your favorite packaging system. E.g., in Ubuntu: # Or if you prefer you can install virtualenv using your favorite packaging
# system. E.g., in Ubuntu:
# $ sudo apt-get update && sudo apt-get install virtualenv # $ sudo apt-get update && sudo apt-get install virtualenv
# Then: # Then:
# $ cd $my_work_dir # $ cd $my_work_dir
# $ virtualenv my_env # $ virtualenv my_env
# $ . my_env/bin/activate # $ . my_env/bin/activate
# #
# Next, optionally uncomment the OpenAI gym lines (see below). If you do, make sure to install the dependencies first. # Next, optionally uncomment the OpenAI gym lines (see below).
# If you do, make sure to install the dependencies first.
# If you are interested in xgboost for high performance Gradient Boosting, you
# should uncomment the xgboost line (used in the ensemble learning notebook).
# #
# Then install these requirements: # Then install these requirements:
# $ pip install --upgrade -r requirements.txt # $ pip install --upgrade -r requirements.txt
@@ -19,26 +23,65 @@
# $ jupyter notebook # $ jupyter notebook
# #
##### Core scientific packages
jupyter==1.0.0 jupyter==1.0.0
matplotlib==2.0.2 matplotlib==2.2.2
numexpr==2.6.3 numpy==1.14.3
numpy==1.13.1 pandas==0.22.0
pandas==0.20.3 scipy==1.1.0
Pillow==4.2.1
protobuf==3.4.0
psutil==5.3.1 ##### Machine Learning packages
scikit-learn==0.19.0 scikit-learn==0.19.1
scipy==0.19.1
sympy==1.1.1 # Optional: the XGBoost library is only used in the ensemble learning chapter.
tensorflow==1.3.0 #xgboost==0.71
##### Deep Learning packages
# Replace tensorflow with tensorflow-gpu if you want GPU support. If so,
# you need a GPU card with CUDA Compute Capability 3.0 or higher support, and
# you must install CUDA, cuDNN and more: see tensorflow.org for the detailed
# installation instructions.
tensorflow==1.8.0
#tensorflow-gpu==1.8.0
# Forcing bleach to 1.5 to avoid version incompatibility when installing
# TensorBoard.
bleach==1.5.0
Keras==2.1.6
# Optional: OpenAI gym is only needed for the Reinforcement Learning chapter. # Optional: OpenAI gym is only needed for the Reinforcement Learning chapter.
# There are a few dependencies you need to install first, check out: # There are a few dependencies you need to install first, check out:
# https://github.com/openai/gym#installing-everything # https://github.com/openai/gym#installing-everything
#gym[all]==0.9.3 #gym[all]==0.10.5
# If you only want to install the Atari dependency, uncomment this line instead: # If you only want to install the Atari dependency, uncomment this line instead:
#gym[atari]==0.9.3 #gym[atari]==0.10.5
##### Image manipulation
imageio==2.3.0
Pillow==5.1.0
scikit-image==0.13.1
##### Extra packages (optional)
# Nice utility to diff Jupyter Notebooks.
nbdime==0.4.1
# May be useful with Pandas for complex "where" clauses (e.g., Pandas
# tutorial).
numexpr==2.6.5
# These libraries can be useful in the classification chapter, exercise 4.
nltk==3.3
urlextract==0.8.3
# Optional: these are useful Jupyter extensions, in particular to display # Optional: these are useful Jupyter extensions, in particular to display
# the table of contents. # the table of contents.
https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tarball/master jupyter-contrib-nbextensions==0.5.0