OpenStack and Ceph at scale (45-minute session) | Breakout session
Presents results of testing OSP 10 (OpenStack Newton) and RHCS 2.0 (Ceph Jewel) at the largest scale ever attempted inside Red Hat as of the end of 2016: 1,000 HDDs across 29 servers and 20 compute nodes. Includes experiences with the design, configuration, and build-out of this cluster, along with performance results gathered while stressing the system with typical operational events such as simulated (and real) hardware failures.
Ben England
Senior Principal Engineer, Red Hat
I have worked on distributed storage performance for over two decades, contributing to Ceph, Gluster, EMC Atmos, and IBRIX Fusion (now HP), and was a developer for a decade before that. More recently I authored the "smallfile" benchmark, contributed to the fio benchmark, and built the largest Red-Hat-internal OpenStack-Ceph cluster.
Jared King
Cloud Operations Engineer, Cisco
We have been building Cisco's cloud for two and a half years; it currently runs over 100,000 cores and continues to grow. I am a founding member of the Cloud Operations Engineering team. We use Red Hat OSP and Ceph for our storage.
Room 157A
Thursday, 4th May, 15:30 - 16:15