

SLIDE 1

CSE 232A: Database System Implementation
Arun Kumar

Topic 8: Data Systems for ML Workloads
Book: “Data Management in ML Systems” by Morgan & Claypool Publishers

SLIDE 2: “Big Data” Systems

❖ Parallel RDBMSs and Cloud-Native RDBMSs
❖ Beyond RDBMSs: A Brief History
❖ “Big Data” Systems
❖ The MapReduce/Hadoop Craze
❖ Spark and Other Dataflow Systems
❖ Key-Value NoSQL Systems
❖ Graph Processing Systems
❖ Advanced Analytics/ML Systems

SLIDE 3: Lifecycle/Tasks of ML-based Analytics

Data acquisition → Data preparation → Feature engineering → Training → Model selection → Inference → Monitoring

SLIDE 4: ML 101: Popular Forms of ML

❖ Generalized Linear Models (GLMs); from statistics
❖ Bayesian Networks; inspired by causal reasoning
❖ Decision tree-based: CART, Random Forest, Gradient-Boosted Trees (GBTs), etc.; inspired by symbolic logic
❖ Support Vector Machines (SVMs); inspired by psychology
❖ Artificial Neural Networks (ANNs): Multi-Layer Perceptrons (MLPs), Convolutional NNs (CNNs), Recurrent NNs (RNNs), Transformers, etc.; inspired by brain neuroscience

SLIDE 5: Advanced Analytics/ML Systems

Q: What is a Machine Learning (ML) System?

❖ A data processing system (aka data system) for mathematically advanced data analysis ops (inferential or predictive), i.e., beyond just SQL aggregates
❖ Covers statistical analysis; ML, deep learning (DL); data mining (domain-specific applied ML + feature engineering)
❖ Offers high-level APIs for expressing statistical/ML/DL computations over large datasets

SLIDE 6: Data Management Concerns in ML

Q: How do “ML Systems” relate to ML?
ML Systems : ML :: Computer Systems : TCS

❖ Key concerns in ML: accuracy; runtime efficiency (sometimes)
❖ Additional key practical concerns in ML Systems:
  ❖ Scalability (and efficiency at scale). Q: What if the dataset is larger than single-node RAM?
  ❖ Usability. Q: How are the features and models configured?
  ❖ Manageability. Q: How does it fit within production systems and workflows?
  ❖ Developability. Q: How to simplify the implementation of such systems?
❖ These are long-standing concerns in the DB systems world! Can often trade off accuracy a bit to gain on the rest!

SLIDE 7: Conceptual System Stack Analogy

Layer by layer, Relational DB Systems vs ML Systems:
❖ Theory: First-Order Logic, Complexity Theory vs Learning Theory, Optimization Theory
❖ Program Formalism: Relational Algebra vs Matrix Algebra, Gradient Descent
❖ Program Specification: Declarative Query Language vs TensorFlow? R? Scikit-learn?
❖ Program Modification: Query Optimization vs Depends on ML Algorithm
❖ Execution Primitives: Parallel Relational Operator Dataflows vs ???
❖ Hardware: CPU, GPU, FPGA, NVM, RDMA, etc. (common to both)

SLIDE 8: Categorizing ML Systems

❖ Orthogonal dimensions of categorization:
  1. Scalability: in-memory libraries vs scalable ML systems (work on larger-than-memory datasets)
  2. Target workloads: general ML library vs decision tree-oriented vs deep learning-oriented, etc.
  3. Implementation reuse: layered on top of a scalable data system vs custom from-scratch framework

SLIDE 9: Major Existing ML Systems

❖ General ML libraries:
  ❖ Disk-based files:
  ❖ In-memory:
  ❖ Layered on RDBMS/Spark:
  ❖ Cloud-native:
❖ “AutoML” platforms:
❖ Decision tree-oriented:
❖ Deep learning-oriented:

SLIDE 10: ML as Numeric Optimization

❖ Recall that an ML model is a parametric function f : D_W × D_X → D_Y, where (x_i, y_i) is a training example
❖ Training: the process of fitting model parameters from data
❖ Training can be expressed in this form, aka “empirical risk minimization” (ERM), for many ML models; L(W) is the “loss” function:

L(W) = \sum_{i=1}^{n} l(y_i, f(W, x_i))

❖ l() is a differentiable function; can be compositions
❖ GLMs, linear SVMs, and ANNs fit the above template
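As a concrete instance of the ERM template, here is a minimal sketch (an illustration, not from the slides) of L(W) for logistic regression, a GLM; the names X, y, W are assumptions:

```python
import numpy as np

def logistic_loss(W, X, y):
    """ERM loss L(W) = sum_i l(y_i, f(W, x_i)) for logistic regression.

    W: (d,) parameters; X: (n, d) features; y: (n,) labels in {0, 1}.
    """
    z = X @ W  # f(W, x_i) = W^T x_i, computed for all i at once
    # Per-example log loss l(y_i, z_i) = log(1 + e^{z_i}) - y_i * z_i,
    # written with logaddexp for numerical stability
    return np.sum(np.logaddexp(0.0, z) - y * z)
```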
SLIDE 11: Key Algo. for ML: Gradient Descent

❖ The goal of training is to find a minimizer: W* = \arg\min_W L(W)
❖ Usually not possible to solve for the optimal W in closed form
❖ Gradient Descent (GD) is an iterative procedure to get to an optimal solution: starting from an initial W^{(0)}, it follows the gradient of L(W) downhill through iterates W^{(1)}, W^{(2)}, …

The full gradient is itself a sum over the dataset:

\nabla L(W) = \sum_{i=1}^{n} \nabla l(y_i, f(W, x_i))

Update rule at iteration t, with learning rate hyper-parameter η:

W^{(t+1)} \leftarrow W^{(t)} - \eta \nabla L(W^{(t)})

❖ Each iteration/pass, aka epoch, is akin to a SQL aggregate!
❖ Typically, multiple epochs are needed for convergence
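A minimal sketch of the GD loop just described; `grad_L` is an assumed helper that computes the full-data gradient ∇L(W), i.e., one pass over the dataset per call:

```python
import numpy as np

def gradient_descent(grad_L, W0, eta=0.1, n_epochs=100):
    """Full-batch GD: each update consumes one whole pass over the data."""
    W = np.array(W0, dtype=float)
    for t in range(n_epochs):
        W = W - eta * grad_L(W)  # W^(t+1) <- W^(t) - eta * grad L(W^(t))
    return W
```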
SLIDE 12: GD vs Stochastic GD (SGD)

❖ Disadvantages of GD:
  ❖ An update needs a pass over the whole dataset; inefficient for large datasets (> millions of examples)
  ❖ Gets us only to a local optimum; ANNs are “non-convex”
❖ Stochastic GD (SGD) mitigates the above issues:
  ❖ Popular for large-scale ML, especially ANNs/deep learning
  ❖ Updates based on samples aka mini-batches; batch sizes: 10s to 1000s; OOM more updates per epoch than GD!

Update rule at iteration t, where B is a mini-batch sampled from the dataset without replacement:

W^{(t+1)} \leftarrow W^{(t)} - \eta \tilde{\nabla} L(W^{(t)})

\tilde{\nabla} L(W) = \sum_{i \in B} \nabla l(y_i, f(W, x_i))
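A minimal sketch of one SGD epoch under these definitions; `grad_l(W, Xb, yb)` is an assumed helper returning the summed mini-batch gradient Σ_{i∈B} ∇l(y_i, f(W, x_i)):

```python
import numpy as np

def sgd_epoch(grad_l, W, X, y, eta=0.01, batch_size=32, rng=None):
    """One SGD epoch: shuffle once, then a sequential scan of mini-batches."""
    rng = rng or np.random.default_rng()
    n = len(y)
    perm = rng.permutation(n)  # random shuffle = sampling without replacement
    W = np.array(W, dtype=float)
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        W = W - eta * grad_l(W, X[idx], y[idx])  # one update per mini-batch
    return W
```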

SLIDE 13: Data Access Pattern of SGD

Update rule at iteration t (from the previous slide):

W^{(t+1)} \leftarrow W^{(t)} - \eta \tilde{\nabla} L(W^{(t)}), with \tilde{\nabla} L(W) = \sum_{i \in B} \nabla l(y_i, f(W, x_i)) over a mini-batch B sampled without replacement

[Figure: the original dataset is first given a random “shuffle” (e.g., ORDER BY RAND()), yielding a randomized dataset. Epoch 1 is a sequential scan over it: mini-batch 1 takes W^{(0)} to W^{(1)}, mini-batch 2 to W^{(2)}, mini-batch 3 to W^{(3)}, and so on. An (optional) new random shuffle precedes Epoch 2, another sequential scan that continues from W^{(3)} to W^{(4)}, …]
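This access pattern corresponds to a driver loop like the following sketch, reusing the hypothetical `sgd_epoch` from the previous slide; each epoch gets a fresh shuffle, mirroring the (optional) new random shuffle in the figure:

```python
import numpy as np

def sgd_train(grad_l, W0, X, y, n_epochs=10, seed=0, **kwargs):
    """Multi-epoch SGD: W carries over across epochs; each epoch re-scans."""
    W = np.array(W0, dtype=float)
    for epoch in range(n_epochs):
        rng = np.random.default_rng(seed + epoch)  # new shuffle per epoch
        W = sgd_epoch(grad_l, W, X, y, rng=rng, **kwargs)
    return W
```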

SLIDE 14: Data Access Pattern of SGD

❖ An SGD epoch is similar to SQL aggregates but also different:
  ❖ More complex aggregation state (running info): the model parameters W^{(t)}
  ❖ Multiple mini-batch updates to the model parameters within a pass
  ❖ Sequential dependency across mini-batches in a pass!
  ❖ Need to keep track of model parameters across epochs
  ❖ (Optional) New random shuffling before each epoch
❖ Not an algebraic aggregate; hard to parallelize!
❖ Not even commutative: different random shuffle orders give different results (very unlike relational ops); see the sketch below!

Q: How to implement scalable SGD in an ML system?
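To make the non-commutativity concrete, this sketch runs the same epoch under two shuffle orders and gets different parameters; `grad_l` here is a hand-derived logistic-regression gradient, and `sgd_epoch` is the sketch from Slide 12:

```python
import numpy as np

def grad_l(W, Xb, yb):
    # Summed logistic-regression mini-batch gradient: X^T (sigmoid(XW) - y)
    z = Xb @ W
    return Xb.T @ (1.0 / (1.0 + np.exp(-z)) - yb)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X @ np.ones(5) > 0).astype(float)

W_a = sgd_epoch(grad_l, np.zeros(5), X, y, rng=np.random.default_rng(1))
W_b = sgd_epoch(grad_l, np.zeros(5), X, y, rng=np.random.default_rng(2))
print(np.allclose(W_a, W_b))  # False in general: result depends on tuple order
```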
SLIDE 15: Bismarck: Single-Node Scalable SGD

❖ An SGD epoch runs in RDBMS process space using its User-Defined Aggregate (UDA) abstraction with 4 functions:
  ❖ Initialize: run once; set up info; allocate memory for W^{(t)}
  ❖ Transition: run per tuple; compute the gradient as running info; track mini-batches and update model params
  ❖ Merge: run per worker at the end; “combine” model params
  ❖ Finalize: run once at the end; return the final model params
❖ Commands for shuffling, running multiple epochs, checking convergence, and validation/test error measurements are issued from an external controller written in Python

https://adalabucsd.github.io/papers/2012_Bismarck_SIGMOD.pdf
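A Python sketch of the four-function UDA shape (illustrative only, not the actual Bismarck implementation); the engine would call transition() once per tuple during the table scan, reusing the earlier hypothetical `grad_l`:

```python
import numpy as np

class SGDEpochUDA:
    """Sketch of one SGD epoch as a User-Defined Aggregate."""

    def initialize(self, dim, eta=0.01, batch_size=32):
        self.W = np.zeros(dim)          # model params: the aggregation state
        self.eta, self.batch_size = eta, batch_size
        self.buf_X, self.buf_y = [], []

    def transition(self, x, y):
        """Run per tuple: buffer until a mini-batch fills, then update W."""
        self.buf_X.append(x)
        self.buf_y.append(y)
        if len(self.buf_y) == self.batch_size:
            Xb, yb = np.asarray(self.buf_X), np.asarray(self.buf_y)
            self.W -= self.eta * grad_l(self.W, Xb, yb)
            self.buf_X, self.buf_y = [], []

    def merge(self, other):
        """Run per worker at the end: 'combine' params (see next slide)."""
        self.W = (self.W + other.W) / 2.0

    def finalize(self):
        """Run once at the end: return the final model parameters."""
        return self.W
```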
SLIDE 16: Combining SGD Model Parameters

❖ The RDBMS takes care of scaling UDA code to larger-than-RAM data and distributed execution across workers
❖ Tricky part: how to “combine” model params from the workers, given that an SGD epoch is not an algebraic aggregate?
❖ The Master typically performs “Model Averaging” over the local params W_i^{(t)} of Workers 1 through n:

W^{(t)} = \frac{1}{n} \sum_{i=1}^{n} W_i^{(t)}

❖ A bizarre heuristic!
❖ Works OK for GLMs
❖ Terrible for ANNs!
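The averaging step itself is a one-liner; a sketch of the master-side combine, assuming each worker ran the hypothetical UDA sketch above on its data partition:

```python
def model_average(worker_params):
    """Master: W^(t) = (1/n) * sum_i W_i^(t)."""
    return sum(worker_params) / len(worker_params)

# e.g., after all workers finish their partitions for this epoch:
# W_next = model_average([uda.finalize() for uda in worker_udas])
```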

SLIDE 17: ParameterServer for Distributed SGD

❖ Disadvantages of Model Averaging for distributed SGD:
  ❖ Poor convergence for non-convex/ANN models; leads to too many epochs and typically poor ML accuracy
  ❖ The UDA merge step is a choke point at scale (n in the 100s); model param size can be huge (even GBs); wastes resources
❖ ParameterServer is a more flexible from-scratch design of an ML system specifically for distributed SGD:
  ❖ Breaks the synchronization barrier for merging: allows asynchronous updates from workers to master
  ❖ Flexible communication frequency: at the mini-batch level too

https://www.cs.cmu.edu/~muli/file/parameter_server_osdi14.pdf

SLIDE 18: ParameterServer for Distributed SGD

[Figure: Workers 1 through n exchange state with a multi-server “master” (PS 1, PS 2, …); each server manages a part of W^{(t)}. Workers push when ready and pull when needed; there is no synchronization across workers or servers. Workers send gradients to the master for updates at each mini-batch (or at a lower frequency), so the master may receive, say, \tilde{\nabla}L(W^{(t)}) from one worker alongside a stale \tilde{\nabla}L(W^{(t-1)}) from another.]

❖ Model params may get out-of-sync or stale, but SGD turns out to be remarkably robust; having multiple updates per epoch really helps
❖ Communication cost per epoch is higher (per mini-batch)
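A toy single-process sketch of the push/pull pattern, with threads standing in for workers, a lock standing in for an atomic shard update, and the earlier hypothetical `grad_l` (an illustration under these assumptions, not the actual OSDI'14 system); note there is no barrier across workers:

```python
import threading
import numpy as np

class ToyParameterServer:
    """Holds (a shard of) W; workers pull params and push gradients at will."""

    def __init__(self, dim, eta=0.01):
        self.W = np.zeros(dim)
        self.eta = eta
        self.lock = threading.Lock()  # atomic update, not a worker barrier

    def pull(self):
        with self.lock:
            return self.W.copy()      # may be stale by the time it is used

    def push(self, grad):
        with self.lock:
            self.W -= self.eta * grad # apply whichever gradient arrives next

def worker(ps, X, y, batch_size=32, n_batches=100, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_batches):
        idx = rng.choice(len(y), size=batch_size, replace=False)
        W = ps.pull()                          # pull (possibly stale) params
        ps.push(grad_l(W, X[idx], y[idx]))     # push mini-batch gradient
```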

SLIDE 19: Deep Learning Systems

❖ Offer 3 crucial new capabilities for DL users:
  ❖ High-level APIs to easily construct complex neural architectures; aka differentiable programming
  ❖ Automatic differentiation (“autodiff”) to compute \tilde{\nabla}L(·) directly from code for L(·)
  ❖ Automatic backpropagation: the chain rule to propagate gradients and update model params across ANN layers
❖ “Neural computational graphs” are compiled down to low-level hardware-optimized physical matrix ops (CPU, GPU, etc.)
❖ In-built support for many variants of GD and SGD
❖ Wider tooling infra. for saving models, plotting loss, etc.
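A minimal autodiff sketch in PyTorch (one such DL system, chosen here as an assumption since the slides do not prescribe it): the gradient is derived automatically from ordinary code for the loss.

```python
import torch

# A tiny logistic-regression loss L(W) written as plain tensor code
X = torch.randn(100, 5)
y = (torch.rand(100) > 0.5).float()
W = torch.zeros(5, requires_grad=True)  # ask the system to track ops on W

loss = torch.nn.functional.binary_cross_entropy_with_logits(
    X @ W, y, reduction="sum")

loss.backward()   # autodiff + backprop: fills W.grad with dL/dW
print(W.grad)     # matches the hand-derived X^T (sigmoid(XW) - y)

# Built-in GD/SGD variants then consume these gradients:
opt = torch.optim.SGD([W], lr=0.01)
opt.step()        # W^(t+1) <- W^(t) - eta * W.grad
```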
slide-20
SLIDE 20

20

PDF: https://www.morganclaypool.com/doi/10.2200/ S00895ED1V01Y201901DTM057 Hardcopy: https://www.morganclaypoolpublishers.com/ catalog_Orig/product_info.php?products_id=1366 Takeaway: Scalable, efficient, and usable systems for ML have become critical for unlocking the value of “Big Data” Advertisement: I will offer CSE 291/234 titled “Data Systems for Machine Learning” in Fall 2020 If you are interested in learning more about this topic, read my recent book “Data management ML Systems”:

Machine Learning Systems

SLIDE 21: Switch to Krypton Slides