In recent years, the availability of high-throughput data from genomic, financial, environmental, and marketing applications, among others, has created an urgent need for methodology and tools for analyzing high-dimensional data. The explosion of data, driven by advances in science and information technology, has left almost no field untouched. It is not uncommon to see datasets measured in terabytes or with millions of covariates. The field of non-parametric and semi-parametric statistics is quickly evolving and expanding to meet the challenges posed by big data, and a variety of new techniques have been developed. Some of these techniques leverage recent advances in parallel and distributed computing, while others rely on appropriate sparsity assumptions to reduce the effective number of covariates. Moreover, recent work has shown that many of these methodologies drastically reduce computation time while retaining the optimality properties of standard non-parametric and semi-parametric methods. Despite important breakthroughs in the last few years, many questions remain unanswered. To address them, the proposed workshop will bring together researchers who have done path-breaking work in this field.