Distributed CPU/GPU Rendering
SquidNet is a network render farm manager that supports both cloud and local CPU/GPU rendering. Cloud (outside the local farm) rendering involves submitting jobs remotely via SquidNet’s Cloud User Interface (CUI). Local farm rendering involves submitting jobs inside the local render farm network using SquidNet’s Local User Interface (LUI). Built-in support is provided for Intel and AMD CPUs, and for Nvidia (CUDA) and AMD (OpenCL) GPUs.
SquidNet’s cloud rendering workflow allows direct submission of CPU/GPU jobs from outside the local render farm network. Jobs can be submitted from a desktop application or from a scriptable command-line interface. All communications between remote clients and the local farm are secured with OpenSSL AES-256 encryption.
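For illustration, the sketch below shows how a remote submission client might configure a TLS channel (backed by OpenSSL) before sending a job, analogous to the encrypted channel described above. This is a generic Python `ssl` example and an assumption on our part, not SquidNet’s actual implementation.

```python
# Minimal sketch: a client-side TLS context for a secure submission
# channel. This is illustrative only, not SquidNet's real code.
import ssl

ctx = ssl.create_default_context()            # TLS with certificate checks
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
ctx.set_ciphers("AES256")                     # prefer AES-256 cipher suites

# A real client would now wrap its socket with ctx.wrap_socket(...).
print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_REQUIRED)
```

With `create_default_context`, hostname checking and certificate verification are on by default, so a misconfigured or spoofed endpoint is rejected before any job data is sent.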
SquidNet’s local rendering workflow allows direct submission of CPU/GPU render jobs to a local render farm. Jobs can be submitted using the local job submission interface or from a scriptable command-line interface. Standard job management operations (suspend, resume, cancel, etc.) are available from both the GUI and command-line interfaces.
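The management operations above imply a job lifecycle with a few well-defined states. The sketch below models that lifecycle generically; the class, state, and operation names are hypothetical illustrations, not SquidNet’s actual API.

```python
# Hypothetical sketch of a render-job lifecycle behind operations such
# as suspend, resume, and cancel. Names are illustrative only.
from enum import Enum, auto

class JobState(Enum):
    QUEUED = auto()
    RUNNING = auto()
    SUSPENDED = auto()
    CANCELLED = auto()
    DONE = auto()

# Allowed state transitions for each management operation.
TRANSITIONS = {
    "suspend": {JobState.QUEUED: JobState.SUSPENDED,
                JobState.RUNNING: JobState.SUSPENDED},
    "resume":  {JobState.SUSPENDED: JobState.QUEUED},
    "cancel":  {JobState.QUEUED: JobState.CANCELLED,
                JobState.RUNNING: JobState.CANCELLED,
                JobState.SUSPENDED: JobState.CANCELLED},
}

class Job:
    def __init__(self, name):
        self.name = name
        self.state = JobState.QUEUED

    def apply(self, op):
        """Apply a management operation, rejecting invalid transitions."""
        try:
            self.state = TRANSITIONS[op][self.state]
        except KeyError:
            raise ValueError(f"cannot {op} job in state {self.state.name}")
        return self.state

job = Job("scene_042")
job.apply("suspend")   # QUEUED -> SUSPENDED
job.apply("resume")    # SUSPENDED -> QUEUED
print(job.state.name)  # -> QUEUED
```

Keeping the transitions in a table makes invalid requests (for example, resuming a job that was never suspended) fail explicitly instead of silently corrupting the queue.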
SquidNet’s distributed computing workflow can be customized to distribute process-intensive tasks across any number of networked servers. Custom task submission templates can be created to handle any set of application processing parameters. Any application that supports an API/SDK or command-line interface can be integrated into SquidNet’s workflow engine. For additional information on how to integrate your application, send us an email.
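To make the template idea concrete, the sketch below shows one generic way a submission template could map user parameters onto an arbitrary application’s command line. The template string and parameter names are hypothetical; a real template would invoke the target renderer rather than the placeholder command used here.

```python
# Illustrative sketch of a custom task-submission template for a
# command-line application. The template and its fields are hypothetical
# examples, not part of SquidNet itself.
import shlex
import subprocess
import sys

# The template maps user-facing parameters onto the application's CLI.
# Here the "application" is just Python printing a message, so the
# example is portable and self-contained.
TEMPLATE = "{python} -c \"print('render {scene} frames {start}-{end}')\""

def build_command(scene, start, end):
    """Expand the template into an argument list for one task."""
    cmd = TEMPLATE.format(python=shlex.quote(sys.executable),
                          scene=scene, start=start, end=end)
    return shlex.split(cmd)

result = subprocess.run(build_command("shot01", 1, 50),
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # -> render shot01 frames 1-50
```

Because the expansion produces an ordinary argument list, the same mechanism works for any tool that exposes a command-line interface, which is the integration point the paragraph above describes.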
SquidNet’s GPU interface makes it an ideal candidate for distributed machine-learning operations. The front-end local and cloud-based user interfaces deliver all user input data to the back-end GPU processing nodes. When processing completes, SquidNet bundles the output content and automatically downloads it to the user’s desktop. Send us an email if you’re interested in becoming a development partner for our machine-learning interface design.
SquidNet’s extensible interface can support any compute-intensive task that can be distributed across hundreds of parallel processing nodes. These tasks include CPU/GPU animation rendering, virtual reality scene processing, data mining, analytical processing, and scientific experimentation. Custom job templates give users the flexibility to manage their custom applications across their computer network. Any application that provides an SDK/API or command-line interface is supported.
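The core of distributing such a task is splitting it into independent chunks, one per node. The sketch below shows a minimal, generic chunking policy using a frame range as the example workload; the function and its balancing rule are our own illustration, not SquidNet’s scheduler.

```python
# Minimal sketch: divide one compute-intensive job (here, an inclusive
# frame range) into independent chunks for parallel nodes. The chunking
# policy is illustrative; any divisible workload can be split this way.
def split_frames(start, end, nodes):
    """Split [start, end] across `nodes`, keeping chunk sizes
    within one frame of each other."""
    total = end - start + 1
    base, extra = divmod(total, nodes)
    chunks, cursor = [], start
    for i in range(nodes):
        size = base + (1 if i < extra else 0)
        if size == 0:          # more nodes than frames
            break
        chunks.append((cursor, cursor + size - 1))
        cursor += size
    return chunks

print(split_frames(1, 100, 4))  # -> [(1, 25), (26, 50), (51, 75), (76, 100)]
```

Each chunk can then be dispatched to a different node as a self-contained command-line task, which is what makes "hundreds of parallel processing nodes" practical for embarrassingly parallel workloads like frame rendering.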
Support is provided for Windows (7/8/10), macOS (Sierra, etc.), and Linux (Ubuntu, Red Hat, Fedora, etc.) operating systems. All graphical and command-line interfaces are identical across operating systems. For each platform, there is a single installer for the back-end server and a single installer for the front-end Cloud Interface. See the download section.