JELLYFISH PICTURES CASE STUDY


Driving Operational Efficiency with Innovation


A related issue was the high cost of running render workloads in more expensive cloud regions. The rendering work for each project requires hundreds or thousands of GPUs and CPUs, and the cost of compute varies significantly between cloud regions depending on demand and local pricing. Rendering in London or Los Angeles is significantly more expensive than in cloud regions located further from major city centres. Cloud-bursting renders to lower-cost regions made sense, but the added cost and time of moving files across Azure locations did not. Jellyfish sought technologies that could overcome ‘data gravity’ and efficiently move the data to the optimal compute and to remote artists, rather than moving the compute and the artists to where the data was.


“We saw the opportunity to drive operational efficiencies by first leveraging available compute resources which we already own, and then bursting not just to the cloud for scale, but to the most cost-effective cloud region for cost efficiencies. We needed a solution to simply and quickly move data from one location to another, even if they were literally half a world apart. We also needed a way to control high render costs without adding more time to an already intensive workflow.”


Jeremy Smith, CTO, Jellyfish Pictures


The Solution: Massive File Movement, Massively Simplified


Jellyfish partnered with Azure and Hammerspace to transparently orchestrate content to a globally distributed, growing workforce and to enable cost-effective renders in a choice of geographic cloud regions. The solution is seamless to artists’ workflows because Hammerspace is integrated with Jellyfish’s workflow management tool, Autodesk ShotGrid. When a project triggers content movement within ShotGrid, Hammerspace handles it transparently, leveraging Terraform in Azure to make hundreds of millions of files globally accessible for read/write in multiple locations within minutes.
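In outline, the pattern is event-driven: a project event in ShotGrid triggers a call that tells Hammerspace where the project’s files should live. The sketch below is illustrative only. The ShotGrid lookup uses the real shotgun_api3 Python client, but the Hammerspace endpoint, route, payload fields, and share layout are hypothetical placeholders, not the documented Hammerspace API.

```python
# Illustrative sketch: react to a ShotGrid project event by asking
# Hammerspace to place the project's files near a low-cost render region.
# The shotgun_api3 usage is real; the Hammerspace REST endpoint, routes,
# and payload fields below are HYPOTHETICAL placeholders.
import requests
import shotgun_api3

SG_URL = "https://jellyfish.shotgunstudio.com"    # example site URL
HS_API = "https://anvil.example.internal/api/v1"  # hypothetical Anvil endpoint

sg = shotgun_api3.Shotgun(SG_URL, script_name="render-dispatch",
                          api_key="<redacted>")

def dispatch_project(project_id: int, region: str) -> None:
    """Look up a project's name in ShotGrid, then request placement of its
    share in the chosen Azure region via a hypothetical Hammerspace call."""
    project = sg.find_one("Project", [["id", "is", project_id]], ["name"])
    resp = requests.post(
        f"{HS_API}/placement-objectives",             # hypothetical route
        json={
            "share": f"/projects/{project['name']}",  # hypothetical layout
            "objective": f"place-on:{region}",        # hypothetical objective
        },
        timeout=30,
    )
    resp.raise_for_status()

# e.g. burst this project's data toward a cheaper region before rendering
dispatch_project(1234, region="azure-southcentralus")
```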


To build its Global Data Environment with Hammerspace, Jellyfish deployed on-premises Anvil metadata servers, each with four DSX (data services) nodes. The nodes include high-speed NVMe SSDs and connect to Hammerspace instances in multiple Azure regions. The Anvils replicate metadata between sites in a bi-directional configuration; all sites are active, and every artist can perform high-performance read/write on the same shared dataset.


Jellyfish also improved disaster recovery planning using Hammerspace’s replication process to ensure the high availability of assets for ongoing work in the event of an outage.


The Outcome: Local Access to Remote Content, At Last!


By dynamically combining Hammerspace on-premises resources and cloud instances in low-cost cloud regions, Jellyfish optimized its business for fast growth, scalability, and revenue.


Artists can log in from anywhere via the Hammerspace Global Data Environment, making it possible for any user, any application, and any location to share the same data set or content repository. Files appear as local files to users, and artists access their work without making copies.


Dynamic workloads are always available, so staff in different time zones can work on the same projects without disrupting each other. Artists can also apply custom metadata tags to files through Autodesk ShotGrid or directly through the Hammerspace API, simplifying downstream orchestration and processing.
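As a sketch of what file-level tagging might look like, the snippet below attaches a custom tag to a single file over REST so a downstream pipeline step can act on it. The endpoint, route, and payload shape are assumptions for illustration, not the documented Hammerspace API.

```python
# Illustrative sketch only: tag one file with custom metadata so a
# downstream orchestration step can select it. The /files/tags route
# and payload fields are HYPOTHETICAL, not the documented Hammerspace API.
import requests

HS_API = "https://anvil.example.internal/api/v1"  # hypothetical endpoint

def tag_file(path: str, key: str, value: str) -> None:
    """Attach a key/value tag to a file in the global namespace."""
    resp = requests.post(
        f"{HS_API}/files/tags",                   # hypothetical route
        json={"path": path, "tags": {key: value}},
        timeout=30,
    )
    resp.raise_for_status()

# e.g. mark a rendered frame as approved so orchestration can archive it
tag_file("/projects/reef/shots/sh010/frame_0420.exr", "review", "approved")
```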


The Jellyfish workloads are also demanding from a performance perspective, with hundreds of simultaneous render jobs accessing Hammerspace from Windows workstations over the SMB protocol. Beyond raw IOPS and throughput, Hammerspace has made Jellyfish think about performance in a new way: data locality at a file-granular level, so that users have the files they need when they need them.


“Hammerspace is a core part of the Jellyfish strategic vision, helping us expand our global workforce in a highly competitive industry, increasing our productivity to meet and exceed our clients’ expectations while greatly reducing costs on multiple levels. The intelligence of the Hammerspace solution gives us far greater control of our data and provides quantifiable and tangible value to our business.”


Jeremy Smith, CTO, Jellyfish Pictures


hammerspace.com

