1 Cauchy-Schwarz Inequality & Jensen's Inequality

These two inequalities are widely applied in many areas, not only probability. For probability, they can be written as follows: (1) Cauchy-Schwarz inequality: if X and Y have finite variances, then

E[XY]\leq\sqrt{E[X^2]E[Y^2]}
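A familiar consequence: applying the inequality to the centered variables X-EX and Y-EY (and again with X replaced by -X) gives |\mathrm{Cov}(X,Y)|\leq\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}, which is exactly why the correlation coefficient always lies in [-1,1].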

(2) Jensen's Inequality: If g is a convex function, then

Eg(X)\geq g(EX)
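For a quick sanity check, g(x)=x^2 is convex, so Jensen's inequality gives E[X^2]\geq (EX)^2, which is just the statement that \mathrm{Var}(X)\geq 0.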

2 Markov Inequalities 

Markov's inequality is pretty famous and basic, so let us first recall it as follows: for any random variable X\geq 0 and any positive function f of X:
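Presumably the bound being recalled is the generalized Markov inequality: for any a>0, if f is positive and nondecreasing (so that \{X\geq a\}\subseteq\{f(X)\geq f(a)\}), then

P(X\geq a)\leq\frac{E[f(X)]}{f(a)}

Taking f(x)=x recovers the familiar form P(X\geq a)\leq E[X]/a.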

Continue reading

There are several pretty interesting yes-or-no questions about the simplex method in linear programming. All of them are taken from the book in the references (actually they are part of my homework). Although the simplex method is very popular and many people know how to solve an LP via simplex, we may still forget or overlook some details of the theory. That is why I think the following questions are interesting. For these questions, we consider an LP to maximize \textbf{cx} subject to \textbf{x}\in X=\{\textbf{x}:\textbf{Ax}=\textbf{b},\textbf{x}\geq\textbf{0}\}, where \textbf{A} is m\times n of rank m<n.

1 Let \bar{\textbf{x}} be a feasible solution with exactly m positive components. Then \bar{\textbf{x}} is an extreme point of X.
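A hint at why this question is subtler than it looks (my own sketch, not from the book): having exactly m positive components does not force the corresponding m columns of \textbf{A} to be linearly independent, and independence is what extremality actually requires. For instance, take m=2, n=3 with columns a_1=(1,0)^T, a_2=(2,0)^T, a_3=(0,1)^T and b=(3,0)^T. Then \bar{\textbf{x}}=(1,1,0) is feasible with exactly two positive components, yet for small \epsilon>0 it is the midpoint of the two feasible points (1+2\epsilon,1-\epsilon,0) and (1-2\epsilon,1+\epsilon,0), so it is not an extreme point.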

Continue reading

Apart from the lasso algorithm itself, there are also many extensions of lasso regression, such as the adaptive lasso and the elastic net.

3.1 Adaptive lasso 

Adaptive lasso is one of the extensions of lasso. It uses a weighted \ell_1 penalty, that is,

\hat{\beta}=\arg\min\limits_{\beta}||y-\sum\limits_{j=1}^px_j\beta_j||^2+\lambda \sum\limits_{j=1}^pw_j|\beta_j|

where w_j=1/|\hat{\beta}_j^{\mathrm{ols}}|^{\nu} (\nu>0) and \hat{\beta}_j^{\mathrm{ols}} is the OLS estimate of \beta_j. As a matter of fact, the ordinary lasso is the particular case of adaptive lasso with constant weights w_j\equiv 1; a short fitting sketch follows below. Continue reading
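Here is a minimal sketch (not from the original post) of how one can fit the adaptive lasso with an off-the-shelf lasso solver: substituting \beta_j=\gamma_j|\hat{\beta}_j^{\mathrm{ols}}|^{\nu} turns the weighted penalty into a plain \ell_1 penalty on \gamma, so it suffices to rescale the columns of the design matrix. The function name and the lam/nu defaults are illustrative, and note that scikit-learn's Lasso puts a 1/(2n) factor on the squared loss, so its alpha is a rescaled \lambda.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, lam=0.1, nu=1.0):
    """Adaptive lasso via column rescaling (illustrative helper).

    With scale_j = |b_j|**nu for the OLS estimate b, the substitution
    beta_j = gamma_j * scale_j maps the weighted penalty
    sum_j |beta_j| / scale_j to the plain penalty sum_j |gamma_j|.
    """
    b_ols = LinearRegression().fit(X, y).coef_
    scale = np.abs(b_ols) ** nu              # scale_j = 1 / w_j
    gamma = Lasso(alpha=lam).fit(X * scale, y).coef_
    return gamma * scale                     # back to the original parametrization
```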

Gaussian processes are a kind of Bayesian method in machine learning. The most significant difference from classical algorithms is that Bayesian methods do not have to make a single "best guess" prediction for new test points. Instead, they compute a posterior predictive distribution for the new test inputs. That is, Bayesian algorithms provide a good way to quantify the uncertainty in model estimates, which classical methods usually cannot.
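To make that concrete, here is a minimal sketch (my own, not the post's code) of the posterior predictive computation for GP regression with a squared-exponential kernel; the length scale and noise level are illustrative choices. The prediction at each test input is a full Gaussian, and its variance is exactly the uncertainty estimate discussed above.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_posterior(X, y, X_star, noise=0.1):
    """Posterior predictive mean and variance of GP regression at X_star."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))   # training covariance + noise
    K_s = rbf(X, X_star)                        # train/test cross-covariance
    mean = K_s.T @ np.linalg.solve(K, y)
    cov = rbf(X_star, X_star) - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

# Example: observe sin at five points, predict (with error bars) on a grid
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
mu, var = gp_posterior(X, np.sin(X), np.linspace(-3.0, 3.0, 7))
```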

1 Multivariate Gaussian Distributions

Continue reading

Autumn is a beautiful season, so going outside to enjoy the scenery is a pretty popular choice during fall break. Before I arrived in Blacksburg, I had heard that the Blue Ridge Mountains are very beautiful and well worth a hike, and luckily they are not very far from Blacksburg. Although the hike was more than 8 miles and all of us felt extremely tired, I was still very happy. Since my English is not good enough to describe the fantastic landscape, I just want to share some photos here, although their quality is not very good.

[Photo: Blue Ridge Mountains hike, 2014-10-11]

Continue reading

Matrix decomposition (or factorization) is pretty important in many research areas, especially in data analysis, for example using the SVD or EVD in PCA. Actually there are more than ten kinds of matrix decomposition methods. In general, researchers divide these methods into four types: diagonal factorizations (like the SVD), triangular factorizations (like LU), triangular-diagonal decompositions (like the Schur decomposition), and tri-diagonal decompositions. Here the triangular factorizations are discussed first.

1 Cholesky factorization
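Before the details, a minimal sketch (my own illustration, not the post's code) of this first triangular factorization: for a symmetric positive definite A, Cholesky produces a lower-triangular L with A=LL^T via the textbook column-by-column recurrence. In practice one would simply call np.linalg.cholesky.

```python
import numpy as np

def cholesky(A):
    """Textbook Cholesky: lower-triangular L with A = L @ L.T.

    Assumes A is symmetric positive definite (np.sqrt returns NaN otherwise).
    """
    n = A.shape[0]
    L = np.zeros((n, n))
    for j in range(n):
        # diagonal entry: remove the contribution of columns already built
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# Quick check on a small SPD matrix
A = np.array([[4.0, 2.0], [2.0, 3.0]])
assert np.allclose(cholesky(A) @ cholesky(A).T, A)
```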

Continue reading
