There are several interesting yes-or-no questions about the simplex method in linear programming. All of them are taken from the book in the references (in fact, they are part of my homework). Although the simplex method is very popular and many people know how to solve an LP via simplex, we may still forget or overlook some details of the theory. That is why I think the following questions are interesting. For these questions, we consider an LP: maximize $\textbf{cx}$ subject to $\textbf{x}\in X=\{\textbf{x}:\textbf{Ax}=\textbf{b},\textbf{x}\geq\textbf{0}\}$, where $\textbf{A}$ is $m\times n$ of rank $m$.

1 Let $\bar{\textbf{x}}$ be a feasible solution with exactly $m$ positive components. Then $\bar{\textbf{x}}$ is an extreme point of $X$.

Apart from the lasso algorithm itself, there are many extensions of lasso regression, such as the adaptive lasso and the elastic net.

The adaptive lasso is one of these extensions. It uses a weighted $\ell_1$ penalty of the form

$\hat{\beta}=\arg\min||y-\sum\limits_{j=1}^px_j\beta_j||^2+\lambda \sum\limits_{j=1}^pw_j|\beta_j|$

where $w_j=1/|\hat{\beta}_j|^{\nu}$ $(\nu>0)$ and $\hat{\beta}_j$ is the OLS estimate. In fact, the ordinary lasso is the particular case of the adaptive lasso with $w_j=1$. Continue reading
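As a rough illustration, the two-step recipe above can be sketched in plain NumPy (the function names and the bare-bones coordinate-descent solver here are my own choices, not from any library): compute OLS weights, rescale column $j$ by $1/w_j$, solve an ordinary lasso, and undo the rescaling.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam, nu=1.0, n_iter=200):
    """Two-step adaptive lasso sketch for
    ||y - X b||^2 + lam * sum_j w_j |b_j|,  with w_j = 1/|b_ols_j|^nu."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = 1.0 / (np.abs(beta_ols) ** nu + 1e-12)  # adaptive weights
    Xw = X / w                                   # rescale column j by 1/w_j
    # ordinary lasso on (Xw, y) by cyclic coordinate descent
    beta = np.zeros(X.shape[1])
    col_sq = (Xw ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r_j = y - Xw @ beta + Xw[:, j] * beta[j]  # residual excluding x_j
            beta[j] = soft_threshold(Xw[:, j] @ r_j, lam / 2.0) / col_sq[j]
    return beta / w  # undo the rescaling
```

The substitution $\beta_j = \beta'_j / w_j$ turns the weighted penalty into a plain $\ell_1$ penalty, which is why a standard lasso solver on the rescaled design matrix suffices.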

Gaussian processes are a kind of Bayesian method in machine learning. The most significant difference from classical algorithms is that Bayesian methods do not have to make a single "best guess" prediction for a new test point. Instead, they compute a posterior predictive distribution over the new test inputs. That is, Bayesian algorithms provide a natural way to quantify the uncertainty in model estimates, which classical methods usually cannot.
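To make the idea concrete, here is a minimal sketch of the posterior predictive mean and standard deviation for 1-D GP regression (the function names and the squared-exponential kernel with unit hyperparameters are my own assumptions for illustration):

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    # squared-exponential kernel: k(a, b) = sf^2 exp(-(a-b)^2 / (2 ell^2))
    d2 = (A[:, None] - B[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior predictive mean and std of a zero-mean GP with RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)                               # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^{-1} y
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = Kss - v.T @ v                                     # Kss - Ks K^{-1} Ks^T
    return mean, np.sqrt(np.diag(cov) + noise)
```

The returned standard deviation is exactly the quantified uncertainty mentioned above: it is small near the training inputs and grows toward the prior standard deviation far away from them.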

1 Multivariate Gaussian Distributions

Autumn is a beautiful season, so going outside to enjoy the scenery over fall break is a pretty popular choice. Before I arrived in Blacksburg, I had heard that the Blue Ridge Mountains are very beautiful and well worth hiking, and luckily they are not very far from Blacksburg. Although the hike was more than 8 miles and all of us felt extremely tired, I was still very happy. Since my English is not good enough to describe the fantastic landscape, I just want to share some photos here, although their quality is not very good.

Matrix decomposition (or factorization) is pretty important in many research areas, especially in data analysis, for example the SVD or EVD used in PCA. In fact there are more than ten kinds of matrix decomposition methods. In general, researchers divide these methods into four types: diagonal factorizations (like the SVD), triangular factorizations (like LU), triangular-diagonal decompositions (like the Schur decomposition), and tridiagonal decompositions. Here the triangular factorizations are discussed first.

1 Cholesky factorization
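As a starting point for this section, here is a minimal, unoptimized sketch of the Cholesky factorization of a symmetric positive-definite matrix $A$ as $LL^T$ (the naming is my own; there is no pivoting and no positive-definiteness check):

```python
import numpy as np

def cholesky(A):
    """Factor a symmetric positive-definite A as L @ L.T, L lower triangular."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # diagonal entry: sqrt of what is left after subtracting earlier columns
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L
```

In practice one would call `np.linalg.cholesky` directly; the loop version only makes the column-by-column structure of the triangular factorization visible.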

When I received my diploma and bachelor's degree certificate, I realized that I had graduated from Zhejiang University. Although I encountered numerous difficulties during these four years, I still felt very happy and satisfied. Anyway, the most important goal has been achieved. Now I should set some new goals for the next several years, and I will try my best to complete my PhD study at Virginia Tech.

Now I have arrived in Blacksburg, a very beautiful town in Virginia. It is surrounded by mountains and big trees, so the sky is pretty blue and the air is very fresh. The campus of VT is also impressive, and the local residents are very nice. Despite all this, I still have trouble adjusting to life here. For instance, my spoken English is not very good and I am not familiar with the traffic rules of the United States. But I believe I will overcome these troubles and have a great time in the future. I want to thank everyone who has helped and cared for me. Thank you very much. Continue reading