[WIP] Can we get this working on Replicate?

Post reserved for a future guide on this topic.

As suggested in Running on other cloud providers - #8 by Raj_Dhakad.

My response there:

We’d probably need to bypass a lot of the Replicate-specific features: no cog file, and probably no typed arguments (we know the types we take for callInputs, but the modelInputs are for the most part not touched at all and passed straight to the appropriate diffusers pipeline). Instead we’d just take one long JSON string with everything, which should be OK.
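To make the single-JSON-string idea concrete, here's a sketch of what a client might send. The callInputs/modelInputs split is docker-diffusers-api's own shape, but the specific keys inside each (MODEL_ID, prompt, etc.) are just illustrative, not a definitive list:

```python
import json

# Hypothetical payload: "callInputs" and "modelInputs" are the two
# top-level keys docker-diffusers-api expects; the inner keys below
# are illustrative only.
payload = {
    "callInputs": {
        # options docker-diffusers-api itself interprets
        "MODEL_ID": "runwayml/stable-diffusion-v1-5",
        "PIPELINE": "StableDiffusionPipeline",
    },
    "modelInputs": {
        # passed through untouched to the diffusers pipeline
        "prompt": "an astronaut riding a horse",
        "num_inference_steps": 25,
    },
}

# This is the single string the Replicate model's `inputs` field would receive.
inputs_str = json.dumps(payload)

# Server side, predict() would just reverse this:
decoded = json.loads(inputs_str)
print(sorted(decoded))  # ['callInputs', 'modelInputs']
```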

I’d love to do this but with current obligations I’m not sure when I’ll have a chance. If you or someone else would like to take this on, I’ll be around for guidance. From a quick perusal of the docs, I think the rough steps would be:

Reference: Push a model to Replicate – Replicate

  1. predict.py, something along the lines of:
from cog import BasePredictor, Input
from app import init, inference
import json

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        init()

    def predict(self,
            inputs: str = Input(description="docker-diffusers-api '{ callInputs: {}, modelInputs: {} }' JSON string")
    ) -> str:
        """Run a single prediction on the model"""
        output = inference(json.loads(inputs))
        return output
  2. cog.yaml
build:
  # we don't use cog for the build, hope that's ok :)
predict: "predict.py:Predictor"
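For anyone who wants to poke at the wiring before touching Replicate itself, the Predictor above can be exercised locally with stand-ins for app.init / app.inference. The stubs below are mine, not the real docker-diffusers-api functions, so this only demonstrates the JSON-string plumbing:

```python
import json

# Stand-ins for app.init / app.inference so the wiring can be run
# without the real docker-diffusers-api container.
def init():
    pass

def inference(all_inputs):
    # the real inference() would dispatch to a diffusers pipeline;
    # here we just echo back which top-level keys we parsed
    return json.dumps({"got": sorted(all_inputs)})

class Predictor:  # would subclass cog.BasePredictor in the real predict.py
    def setup(self):
        init()

    def predict(self, inputs: str) -> str:
        return inference(json.loads(inputs))

p = Predictor()
p.setup()
out = p.predict('{"callInputs": {}, "modelInputs": {}}')
print(out)  # {"got": ["callInputs", "modelInputs"]}
```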

But I don’t have any prior experience with Replicate :sweat_smile:

Anyway, let’s see what happens. Regardless, I’ll try to get this working when I have a chance, but that will probably only be next month at the earliest :sweat_smile:


I also don’t have much experience with Replicate.

The Predictor class seems simple enough, similar to the app.py file in docker-diffusers-api, right?

But I'm not sure how to bypass cog. Normally we add dependencies in cog.yaml and then build the image with cog run python, which doesn't work for the dependencies we're using in docker-diffusers-api.

Do you think just using our Dockerfile and adding a call to predict.py will work? Or a cog.yaml with just the predict definition? I will try both, though, just asking for your recommendation :sweat_smile:


Yeah, exactly, I was thinking / hoping we could skip using cog to do the build, do a regular docker build, and then docker push r8.im/<your-username>/<your-model-name>. But totally untested :slight_smile:

And yeah, the Predictor class functions very similarly to app.py, which is why it’s so easy to wrap. The big difference is that usually they want us to define all our inputs with types, but we’d prefer to get it all as a single JSON string (we could, however, if we wanted to, separate out the callInputs, which might be a nice-to-have after everything else works).
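That separated-out-callInputs variant could look roughly like this. This is a sketch only: the inference stub below stands in for app.inference, and in a real predict.py each argument would be wrapped with cog's Input(...):

```python
import json

def inference(all_inputs):
    # stand-in for app.inference; just report the top-level keys it received
    return sorted(all_inputs)

class Predictor:  # would subclass cog.BasePredictor for real
    def predict(self, call_inputs: str = "{}", model_inputs: str = "{}"):
        # re-assemble the single-dict shape app.inference already expects,
        # so the rest of docker-diffusers-api stays untouched
        return inference({
            "callInputs": json.loads(call_inputs),
            "modelInputs": json.loads(model_inputs),
        })

p = Predictor()
print(p.predict('{"MODEL_ID": "x"}', '{"prompt": "hi"}'))
# ['callInputs', 'modelInputs']
```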

But yeah, unfortunately, without any real practical experience with the Replicate tools, and just a light perusal of the docs, this is all speculation at this point.
