Community Spotlight: Cupixel

Making the experience of art creation enjoyable and accessible to everyone.

I’ll be honest with you—I’m horrible at all things art. I was the kid who had to stay after class in 7th grade art because I could not, for the life of me, figure out shading. After years of half-hearted, failed attempts at scratching something decent onto a sketch pad, I resigned myself to the fact that I just wasn’t meant to be an artist.

One of the things I find most fascinating about technology is the ways in which it can clear pathways I would’ve imagined blocked. I’ve found that to be true as I’ve learned more and more about machine learning and mobile development, certainly.

And Cupixel is, for me, an incredible representation of that. It’s a suite of tools and a mobile experience that allow me to imagine I might be able to make good art. I might be able to experience something I thought wasn’t meant for me.

I’ll let Yuval and the Cupixel team take it from here. Here’s our interview:

What’s the background of Cupixel, and how does the mobile experience work within the product?

We founded Cupixel with a very simple vision — to enable anyone to experience art creation.

Mobile devices play a significant part in this vision (together with the special art tools we developed):

  1. Choosing the artwork you’d like to paint — we want to make this process as personal as possible and let people choose artwork based on any (licensed) image, any style, any color, and any level of difficulty (from a 2-hour painting to a 100-hour one).
  2. The painting process — the Cupixel app works as an “artist assistant,” teaching the user tricks of the trade like how to blend colors, sketch an accurate outline, and much more.
  3. Precision — one of the most important parts of the experience is enabling users to precisely paint objects and images that are hard to paint even for pros. We do that by using augmented reality.

What does your tech stack look like, and what tools have you found helpful?

There are 3 main products that work together to create the Cupixel experience: the iOS App, an API server, and a dedicated machine learning server.

Our iOS app was developed using Swift from the start. As Swift evolved, so did our app. Usually we upgraded to the latest Swift version once it was released, so now we’re on 4.2, having started on 3.0. Some of these upgrades were quite painful, but were worth it nevertheless.

We also use Objective-C and C/C++ for more challenging tasks like handling large images and our augmented reality features (in combination with OpenCV). In our experimental builds, we also use OpenEars for voice recognition and are experimenting with image segmentation.

Tools we like working with:

  1. Fastlane with Fabric Beta to automate builds and distribute to testers
  2. Crashlytics to track production issues
  3. SwiftLint to implement basic code standards

To train our machine learning models, we use a dedicated GPU server on Google, working mainly with Python and TensorFlow.
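To make the training workflow concrete, here is a toy gradient-descent loop in plain Python. It stands in for what TensorFlow automates at scale (the function and data here are hypothetical, purely for illustration):

```python
# Toy training loop: fit y = w*x + b to data by gradient descent on
# mean squared error. TensorFlow does this automatically via autodiff;
# this sketch just shows the underlying idea.

def train_linear(points, lr=0.05, epochs=500):
    """Fit y = w*x + b to (x, y) pairs by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(epochs):
        # Analytic gradients of the MSE loss with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated from y = 3x + 1; training should recover w ≈ 3, b ≈ 1.
data = [(x, 3 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b = train_linear(data)
```

In a real TensorFlow pipeline, the model, loss, and optimizer replace the hand-written gradients, and the GPU server handles the heavy lifting.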

Our API server (hosted on AWS) is built with Node.js using Restify, MongoDB (hosted on mLab/Atlas), and Redis (hosted on Redis Labs).

What has been the most challenging part of developing the Cupixel app?

Constantly optimizing our app's memory and CPU usage in order to give users a smooth experience. This is challenging because our app has complex animations and is graphically intensive.

Working with large images in a fast manner is not trivial even on modern devices. To interact (pan/zoom) with large—and sometimes very large (20MP+)—images on screen smoothly, we had to employ the “Tiled Layer” technique, where the image is split into small tiles and only the relevant tiles that are on display are loaded.
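The core of the tiled-layer idea is simple index math: given the visible rect, figure out which tiles intersect it and load only those. Here is a minimal sketch in Python (the function name and numbers are illustrative; on iOS this is what `CATiledLayer` handles for you):

```python
# Tile-selection math behind the "Tiled Layer" technique: given the
# visible rect and a tile size, compute which tiles must be loaded.

def visible_tiles(view_x, view_y, view_w, view_h, tile_size, image_w, image_h):
    """Return (col, row) indices of tiles intersecting the visible rect."""
    first_col = max(0, view_x // tile_size)
    first_row = max(0, view_y // tile_size)
    # Clamp to the image bounds so we never request tiles past the edge.
    last_col = min((image_w - 1) // tile_size, (view_x + view_w - 1) // tile_size)
    last_row = min((image_h - 1) // tile_size, (view_y + view_h - 1) // tile_size)
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]

# A 20 MP image (5000x4000) split into 256 px tiles: a 1024x768 viewport
# panned to (2000, 1000) only touches a small grid of tiles.
tiles = visible_tiles(2000, 1000, 1024, 768, 256, 5000, 4000)
```

Only those tiles are decoded and kept in memory, which is why even 20MP+ images stay responsive while panning and zooming.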

On top of that, we have a feature in the app where the user can tap on the image and isolate specific colors (by highlighting these pixels and dimming the others), or apply canvas texture on the image. To do that we had to go “low-level” and carefully implement these features (using a modified version of PhotoScrollerNetwork).
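The color-isolation idea described above can be sketched in a few lines: keep pixels whose color is within a tolerance of the tapped color, and dim everything else. This is a hypothetical pure-Python version for illustration only; the actual app implements this at a much lower level for performance:

```python
# Sketch of color isolation: highlight pixels near the tapped color,
# dim the rest by scaling their channels down.

def isolate_color(pixels, tapped, tolerance=30, dim=0.25):
    """pixels: list of (r, g, b) tuples. Keep colors near `tapped`; dim others."""
    def close(a, b):
        return all(abs(x - y) <= tolerance for x, y in zip(a, b))
    out = []
    for p in pixels:
        if close(p, tapped):
            out.append(p)                                # highlighted: unchanged
        else:
            out.append(tuple(int(c * dim) for c in p))   # dimmed
    return out

# Tapping a red pixel keeps the two reddish pixels and dims the green one.
image = [(200, 40, 40), (198, 45, 38), (20, 180, 60)]
result = isolate_color(image, tapped=(200, 40, 40))
```

Doing this per-pixel over a 20 MP image is exactly why the team went “low-level” rather than relying on high-level image APIs.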

What excites you the most about machine learning?

The field that attracts us most is deep learning in computer vision. As a startup that wants to enable anyone to experience art creation, the possibilities that deep learning opens are exciting. We’re constantly seeking to replace “traditional” image processing with AI. Obviously style transfer is one feature, but we’re also looking into image segmentation, object detection, recoloring, and others in order to offer more personalized experiences to our users.

Do you have any advice for other developers who are looking to get started with machine learning?

Starting without any academic background in ML can be somewhat intimidating at first. The terminology is rich, and there are lots of frameworks and tools. So in order to get more comfortable, an online course can help quite a bit. There are plenty of introductory-level courses available (Coursera, EdX, DataCamp, etc.).

Then there’s Medium and GitHub, where one can find lots of articles and code samples for cool projects. Many of them come with clear instructions and even pre-built models, so no training is needed.

In case model training is required, setting up a dev environment can be very time consuming. In order to simplify the process, search for an image that already comes with the software you need. Nvidia has its own “NVIDIA GPU Cloud Image” that can be found on various cloud providers.

What does the future hold for the Cupixel app?

We’re building a brand that will be associated with creativity. The Cupixel app will be your personal creative app that will open doors to things you could never have done before:

  1. Direct communication between professional artists and users while painting
  2. A platform to show your artwork and teach others
  3. A platform to explore new artwork and new styles that can be made by anyone
  4. A personalized platform to search for new art

Editor’s Note: Check out some of our other Community Spotlights—Detecting plant disease, an instant lightsaber, and plenty more.

Our team has been at the forefront of Artificial Intelligence and Machine Learning research for more than 15 years and we're using our collective intelligence to help others learn, understand and grow using these new technologies in ethical and sustainable ways.
