Output formatting in Python is passé? Think again!

Color up your notebooks and showcase your AI work in a better way

Everyone knows that, hands down, notebooks are the way to go when you need to write code, be it short or long (even if they don’t admit it 😉)!

Today, I will be empowering you with a set of libraries that can color up your notebooks and help power up your code demonstrations. So let’s set them up one by one and dive into their implementations.

TQDM

Chances are you’re already using this library, as it ships with most Anaconda-based notebook setups. If you aren’t, there’s no better time to start than now. The best part is that setup is trivial: if it’s not already in your environment, a one-line `pip install tqdm` does it. To use tqdm in a simple for loop, fire up the notebook and write:
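As a sketch, here’s the kind of cell I mean (the loop bound is arbitrary; anything large enough to watch works):

```python
from tqdm import tqdm  # third-party: pip install tqdm

total = 0
# A deliberately huge loop so the progress bar stays on screen for a while
for i in tqdm(range(10_000_000)):
    total += i
print(total)
```

Wrapping any iterable in `tqdm(...)` is all it takes; the bar, rate, and ETA come for free.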

Why did I use this ridiculously huge loop? So that you can see the tqdm in action before it exits the loop. For all those visual peeps out there, here’s a picture:

Cool right? Let’s take this to deep learning with PyTorch:

for epoch in range(epochs):
    loss_collector = 0
    total_batches = len(data_loader_train.dataset) // batch_size

    for batch_idx, (inputs, labels) in tqdm(enumerate(data_loader_train)):
        inputs, labels = inputs.cuda(), labels.cuda()

        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        loss_collector += float(loss) / total_batches
    print("Epoch {} Loss {:.4f}".format(epoch, loss_collector))

You’d be surprised to know that this code doesn’t work as expected! In fact, it shows one of the biggest shortcomings of tqdm: an unknown end. Because the data loader is wrapped in `enumerate()` before being handed to tqdm, tqdm can no longer query the loader’s length and has no way of knowing when the loop will end. Thus, it’s impossible for tqdm to draw a progress bar predicting it; all it can do is count iterations.

What can we do to get rid of this? Simple — We tell tqdm how long our loop is supposed to run 🙂

for epoch in range(epochs):
    loss_collector = 0
    total_batches = len(data_loader_train.dataset) // batch_size

    with tqdm(total=total_batches) as pbar:
        for batch_idx, (inputs, labels) in enumerate(data_loader_train):
            inputs, labels = inputs.cuda(), labels.cuda()

            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            loss_collector += float(loss) / total_batches
            pbar.update(1)

    print("Epoch {} Loss {:.4f}".format(epoch, loss_collector))

Thus, TQDM can make our code highly presentable, and our epoch wait-times much more bearable 😛

Termcolor

Termcolor is another small library, installable with `pip install termcolor`, that allows us to color the output of our code. While tqdm enhances presentation, termcolor is focused on improving output readability. Tired of squinting at the screen for that one loss spike? Termcolor can handle it all! Let’s get started with the basics of termcolor —

That’s it! Let’s see the output coloring:

With this, we can easily spot spikes in the loss function. We can also print losses in different colors to help us observe them at a glance. Let’s see an example here —

from termcolor import colored

loss_assembler = []
for epoch in range(epochs):
    loss_collector = 0
    total_batches = len(data_loader_train.dataset) // batch_size

    with tqdm(total=total_batches) as pbar:
        for batch_idx, (inputs, labels) in enumerate(data_loader_train):
            inputs, labels = inputs.cuda(), labels.cuda()

            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            loss_collector += float(loss) / total_batches
            pbar.update(1)

    # Green when this epoch sets a new low; red flags a loss that's out of line
    if all(loss_collector < x for x in loss_assembler):
        print(colored("Epoch {} Loss {:.4f}".format(epoch, loss_collector), 'green'))
    else:
        print(colored("Epoch {} Loss {:.4f}".format(epoch, loss_collector), 'red'))
    loss_assembler.append(loss_collector)

Let’s check out the output for this code —

Voila! The red color helps us easily identify the loss value, which is out of line.

Hack!

Want to go beyond changing mere text colors? Let’s dive into a neat hack to get different color themes in basic print statements with built-in ANSI ‘escape sequences’ of the form `'\x1b[...m'`, where `'\x1b[0m'` resets all formatting.
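Here’s a minimal sketch of the trick (the three numbers select style, foreground, and background; 6 is blink, 30 is black text, 42 is a green background):

```python
# '\x1b[' opens an ANSI escape sequence; '\x1b[0m' resets formatting
styled = '\x1b[6;30;42m' + 'Success!' + '\x1b[0m'
print(styled)
```
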

Let’s see what this gives us as output!

As you might have realized, it’s pretty difficult to guess the exact sequence that gives you a specific color :-P.

So let’s print as many colors as we can and create a reference chart!

###################################
# Borrowed from Stack Overflow:
# https://stackoverflow.com/a/21786287/11714748
###################################

def print_format_table():
    for style in range(8):
        for fg in range(30, 38):
            s1 = ''
            for bg in range(40, 48):
                fmt = ';'.join([str(style), str(fg), str(bg)])
                s1 += '\x1b[%sm %s \x1b[0m' % (fmt, fmt)
            print(s1)
        print('\n')

print_format_table()

More than Coloring?

Of course, coloring is never enough if you have a more impressive or complicated output in mind! Prompt Toolkit lets you format output with HTML-style markup applied to absolutely any string you pass to it. No need to resort to notebook Markdown cells every time you have to include rich formatting!

Let’s check out its installation —
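The package lives on PyPI under the name `prompt_toolkit`, so a plain pip install does the job:

```shell
pip install prompt_toolkit
```
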

For Conda users who want this to be installed on their very own virtual Conda environment, use this —
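One common route (the channel choice here is an assumption; the package is also carried elsewhere) is conda-forge, run inside the activated environment:

```shell
conda install -c conda-forge prompt_toolkit
```
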

Now that we’ve covered installation, let’s take a closer look at the usage of the Prompt Toolkit —
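A minimal sketch of the usage: `print_formatted_text` and the `HTML` wrapper are the library’s entry points, and tag names like `<ansired>` follow its HTML-like mini-language (the exact strings printed here are just illustrative):

```python
from prompt_toolkit import print_formatted_text, HTML  # pip install prompt_toolkit

# HTML() parses an HTML-like mini-language; print_formatted_text renders it
print_formatted_text(HTML('<b>Epoch 3</b> finished with loss <ansired>0.4512</ansired>'))
print_formatted_text(HTML('<u>Underlined</u> and <i>italic</i> text, straight from print!'))
```
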

Let’s check out the output for this code!

Still need more? Check out the following Markdown tips here to build a better notebook —

What’s Next?

Don’t think for an instant that this is all the formatting you can do in a Python notebook or shell! The freedom of using HTML and CSS makes unending formatting possibilities available to the programmer. Give them a shot and make your code craftier than ever! Here’s something to get you started —

Do let me know if you need a hand!

Hmrishav Bandyopadhyay is a 2nd year Undergraduate at the Electronics and Telecommunication department of Jadavpur University, India. His interests lie in Deep Learning, Computer Vision, and Image Processing. He can be reached at — [email protected] || https://hmrishavbandy.github.io

Fritz

Our team has been at the forefront of Artificial Intelligence and Machine Learning research for more than 15 years and we're using our collective intelligence to help others learn, understand and grow using these new technologies in ethical and sustainable ways.
