Snowflake separates the storage and compute layers. Between the storage and data consumption layers sits a metadata layer, the most important part of the Snowflake system: it tracks what data resides in storage, optimizes queries, and handles security services.
Conventional data warehouses and big data solutions increasingly struggle to deliver on their fundamental purpose: to make it easy to amass all your data, enable rapid analytics, and quickly make data insights available to all of the users, consumers, and systems that need them.
To solve that, Snowflake built a new SQL data warehouse from the ground up for the cloud, one designed with a patented new architecture to handle today’s and tomorrow’s data and analytics. The result? A data warehouse that delivers performance, simplicity, concurrency and affordability not possible with other data analytics platforms.
Snowflake is built for speed, even for the most intense workloads. Its patented architecture separates compute from storage, so you can scale compute up and down on the fly, without delay or disruption. You get the performance you need exactly when you need it.
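As a sketch of what this separation looks like in practice, compute in Snowflake is provisioned as virtual warehouses that can be created and resized independently of storage (the warehouse name below is hypothetical):

```sql
-- Create a virtual warehouse (a compute cluster); storage is unaffected.
CREATE WAREHOUSE analytics_wh
  WAREHOUSE_SIZE = 'SMALL'
  AUTO_SUSPEND   = 60      -- suspend after 60 s of inactivity
  AUTO_RESUME    = TRUE;   -- wake automatically when a query arrives

-- Scale up on the fly to handle a heavy workload.
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'X-LARGE';

-- Scale back down once the burst is over.
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'SMALL';
```

Because the warehouse is only a compute resource, resizing it never requires moving or reloading the data held in the storage layer.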
Snowflake is also a fully columnar database with vectorized execution, making it capable of crunching demanding analytic workloads.
Snowflake’s adaptive optimization ensures that queries automatically get the best possible performance – no indexes, distribution keys, sort keys, or tuning parameters to manage.
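To illustrate what "nothing to tune" means, here is a hypothetical table and query: note that the DDL contains none of the index, distribution-key, or sort-key clauses that traditional warehouses require.

```sql
-- Table definition: no CREATE INDEX, DISTKEY, or SORTKEY clauses needed.
CREATE TABLE events (
  event_id   NUMBER,
  user_id    NUMBER,
  event_time TIMESTAMP_NTZ,
  payload    VARIANT        -- semi-structured data (e.g. JSON)
);

-- The optimizer handles pruning and execution strategy automatically;
-- there are no tuning parameters to set for this query.
SELECT user_id, COUNT(*) AS event_count
FROM events
WHERE event_time >= '2020-01-01'
GROUP BY user_id;
```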