Author: Saim Khalid

  • Metaprogramming with Metaclasses

    In Python, metaprogramming refers to the practice of writing code that has knowledge of itself and can manipulate itself. Metaclasses are a powerful tool for metaprogramming in Python, allowing you to customize how classes are created and how they behave. Using metaclasses, you can create more flexible and efficient programs through dynamic code generation and reflection.

    Metaprogramming in Python involves techniques such as decorators and metaclasses. In this tutorial, you will learn about metaprogramming with metaclasses by exploring dynamic code generation and reflection.

    Defining Metaclasses

    Metaprogramming with metaclasses in Python offers advanced capabilities for your programs. One such feature is the __prepare__() method, which allows customization of the namespace in which a class body will be executed.

    This method is called before any class body code is executed, providing a way to initialize the class namespace with additional attributes or methods. The __prepare__() method should be implemented as a classmethod.

    Example

    Here is an example of creating a metaclass with advanced features using the __prepare__() method.

    class MyMetaClass(type):
        @classmethod
        def __prepare__(cls, name, bases, **kwargs):
            print(f'Preparing namespace for {name}')
            # Customize the namespace preparation here
            custom_namespace = super().__prepare__(name, bases, **kwargs)
            custom_namespace['CONSTANT_VALUE'] = 100
            return custom_namespace

    # Define a class using the custom metaclass
    class MyClass(metaclass=MyMetaClass):
        def __init__(self, value):
            self.value = value

        def display(self):
            print(f"Value: {self.value}, Constant: {self.__class__.CONSTANT_VALUE}")

    # Instantiate the class
    obj = MyClass(42)
    obj.display()

    Output

    On executing the above code, you will get the following results −

    Preparing namespace for MyClass
    Value: 42, Constant: 100
    

    Dynamic Code Generation with Metaclasses

    Metaprogramming with metaclasses enables the creation or modification of code during runtime.

    Example

    This example demonstrates how metaclasses in Python metaprogramming can be used for dynamic code generation.

    class MyMeta(type):
        def __new__(cls, name, bases, attrs):
            print(f"Defining class: {name}")

            # Add a dynamic attribute to the class
            attrs['dynamic_attribute'] = 'Added dynamically'

            # Add a dynamic method to the class
            def dynamic_method(self):
                return f"This is a dynamically added method for {name}"

            attrs['dynamic_method'] = dynamic_method

            return super().__new__(cls, name, bases, attrs)

    # Define a class using the metaclass
    class MyClass(metaclass=MyMeta):
        pass

    obj = MyClass()
    print(obj.dynamic_attribute)
    print(obj.dynamic_method())

    Output

    On executing the above code, you will get the following results −

    Defining class: MyClass
    Added dynamically
    This is a dynamically added method for MyClass
    

    Reflection and Metaprogramming

    Metaprogramming with metaclasses often involves reflection, allowing for introspection and modification of class attributes and methods at runtime.

    Example

    In this example, the MyMeta metaclass inspects and prints the attributes of MyClass during its creation, demonstrating how metaclasses can introspect and modify class definitions dynamically.

    class MyMeta(type):
        def __new__(cls, name, bases, dct):
            # Inspect class attributes and print them
            print(f"Class attributes for {name}: {dct}")
            return super().__new__(cls, name, bases, dct)

    class MyClass(metaclass=MyMeta):
        data = "example"

    Output

    On executing the above code, you will get the following results −

    Class attributes for MyClass: {'__module__': '__main__', '__qualname__': 'MyClass', 'data': 'example'}
    
  •  Metaclasses

    Metaclasses are a powerful feature in Python that allow you to customize class creation. By using metaclasses, you can add specific behaviors, attributes, and methods to classes, allowing you to create more flexible, efficient programs. These classes provide the foundation for metaprogramming in Python.

    Metaclasses are an OOP concept present in all Python code by default. Python provides the ability to create custom metaclasses by using the built-in type. type is a metaclass whose instances are classes: any class created in Python is an instance of the type metaclass.
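    A quick way to see this is to check the type of a class object itself. Here is a minimal sketch (the Demo class is just an illustration) −

    class Demo:
        pass

    # A class is itself an object, and its type is the metaclass 'type'
    print(type(Demo))              # <class 'type'>
    print(isinstance(Demo, type))  # True

    # Built-in classes are instances of type as well
    print(type(int), type(str))    # <class 'type'> <class 'type'>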

    Creating Metaclasses in Python

    A metaclass is the class of a class; it defines how a class behaves. Every class in Python is an instance of its metaclass. By default, Python uses the type() function to construct classes. However, you can define your own metaclass to customize class creation and behavior.

    When defining a class, if no base classes or metaclass are explicitly specified, Python uses type() to construct the class. The class body is executed in a new namespace, and the class name is locally bound to the result of type(name, bases, namespace).
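    To make this equivalence concrete, here is a small sketch (the Greeter classes and the hello method are made up for illustration) showing that a class statement and a direct call to type() produce equivalent classes −

    # A class defined with a regular class statement
    class Greeter:
        def hello(self):
            return "hello"

    # The same kind of class built by calling type(name, bases, namespace) directly
    namespace = {'hello': lambda self: "hello"}
    GreeterDynamic = type('GreeterDynamic', (), namespace)

    print(Greeter().hello())                    # hello
    print(GreeterDynamic().hello())             # hello
    print(type(Greeter), type(GreeterDynamic))  # <class 'type'> <class 'type'>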

    Example

    Let’s observe the result of creating a class object without specifying any bases or a metaclass −

    class Demo:
        pass

    obj = Demo()
    print(obj)

    Output

    On executing the above program, you will get the following results −

    <__main__.Demo object at 0x7fe78f43fe80>
    

    This example demonstrates the basics of metaprogramming in Python using metaclasses. The above output indicates that obj is an instance of the Demo class, residing in memory location 0x7fe78f43fe80. This is the default behavior of the Python metaclass, allowing us to easily inspect the details of the class.

    Creating Metaclasses Dynamically

    The type() function in Python can be used to create classes dynamically.

    Example

    In this example, DemoClass is created using the type() function, and an instance of this class is also created and displayed.

    # Creating a class dynamically using type()
    DemoClass = type('DemoClass', (), {})

    obj = DemoClass()
    print(obj)

    Output

    Upon executing the above program, you will get the following results −

    <__main__.DemoClass object at 0x7f9ff6af3ee0>
    

    Example

    Here is another example of creating a class with inheritance by using the type() function, passing an existing class as a base.

    class Demo:
        pass

    Demo2 = type('Demo2', (Demo,), dict(attribute=10))

    obj = Demo2()
    print(obj.attribute)
    print(obj.__class__)
    print(obj.__class__.__bases__)

    Output

    Following is the output −

    10
    <class '__main__.Demo2'>
    (<class '__main__.Demo'>,)
    

    Customizing Metaclass Creation

    In Python, you can customize how classes are created and initialized by defining your own metaclass. This customization is useful for various metaprogramming tasks, such as adding specific behavior to all instances of a class or enforcing certain patterns across multiple classes.

    Customizing class creation can be done by overriding methods in the metaclass, specifically __new__ and __init__.

    Example

    Let’s see an example demonstrating how we can customize class creation using the __new__ method of a metaclass in Python.

    # Define a custom metaclass
    class MyMetaClass(type):
        def __new__(cls, name, bases, dct):
            # Add a version attribute to the class
            dct['version'] = 1.0
            # Modify the class name
            name = 'Custom' + name
            return super().__new__(cls, name, bases, dct)

    # MyMetaClass acts as a template for the class being created
    class Demo(metaclass=MyMetaClass):
        pass

    # Instantiate the class
    obj = Demo()

    # Print the class name and version attribute
    print("Class Name:", type(obj).__name__)
    print("Version:", obj.version)

    Output

    On executing the above code, you will get the following results −

    Class Name: CustomDemo
    Version: 1.0
    

    Example

    Here is another example that demonstrates how to customize class creation using the __init__ method of a metaclass in Python.

    # Define a custom metaclass
    class MyMetaClass(type):
        def __init__(cls, name, bases, dct):
            print('Initializing class', name)
            # Add a class-level attribute
            cls.version = 10
            super().__init__(name, bases, dct)

    # Define a class using the custom metaclass
    class MyClass(metaclass=MyMetaClass):
        def __init__(self, value):
            self.value = value

        def display(self):
            print(f"Value: {self.value}, Version: {self.__class__.version}")

    # Instantiate the class and demonstrate its usage
    obj = MyClass(42)
    obj.display()

    Output

    On executing the above code, you will get the following results −

    Initializing class MyClass
    Value: 42, Version: 10
  •  Memory Management

    In Python, memory management is automatic; it involves managing a private heap that contains all Python objects and data structures. The Python memory manager internally ensures the efficient allocation and deallocation of this memory. This tutorial will explore Python's memory management mechanisms, including garbage collection, reference counting, and how variables are stored on the stack and heap.

    Memory Management Components

    Python's memory management components provide efficient and effective utilization of memory resources throughout the execution of Python programs. Python has three memory management components, loosely illustrated by the sketch after this list −

    • Private Heap: Acts as the main storage for all Python objects and data. It is managed internally by the Python memory manager.
    • Raw Memory Allocator: This low-level component directly interacts with the operating system to reserve memory space in Python’s private heap. It ensures there’s enough room for Python’s data structures and objects.
    • Object-Specific Allocators: On top of the raw memory allocator, several object-specific allocators manage memory for different types of objects, such as integers, strings, tuples, and dictionaries.
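    These components are internal to the interpreter, but you can get a rough feel for the per-object bookkeeping they do with sys.getsizeof(), which reports how many bytes an object occupies in the private heap. The sketch below only illustrates the idea; the exact numbers vary by Python version and platform −

    import sys

    # Different object types are served by different object-specific allocators,
    # so their heap footprints differ (the printed sizes are implementation-dependent)
    print(sys.getsizeof(0))        # a small integer
    print(sys.getsizeof(3.14))     # a float
    print(sys.getsizeof("spam"))   # a short string
    print(sys.getsizeof([]))       # an empty list
    print(sys.getsizeof({}))       # an empty dict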

    Memory Allocation in Python

    Python manages memory allocation in two primary ways − Stack and Heap.

    Stack − Static Memory Allocation

    In static memory allocation, memory is allocated at compile time and stored in the stack. This is typical for function call stacks and variable references. The stack is a region of memory used for storing local variables and function call information. It operates on a Last-In-First-Out (LIFO) basis, where the most recently added item is the first to be removed.

    The stack is generally used for variables of primitive data types, such as numbers, booleans, and characters. These variables have a fixed memory size, which is known at compile-time.

    Example

    Let us look at an example to illustrate how variables of primitive types are stored on the stack. In the example below, the variables x, y, and z are local variables within the function my_function(). They are stored on the stack, and when the function execution completes, they are automatically removed from the stack.

    def my_function():
        x = 5
        y = True
        z = 'Hello'
        return x, y, z

    print(my_function())
    print(x, y, z)

    On executing the above program, you will get the following output −

    (5, True, 'Hello')
    Traceback (most recent call last):
      File "/home/cg/root/71937/main.py", line 8, in <module>
    
    print(x, y, z)
    NameError: name 'x' is not defined

    Heap − Dynamic Memory Allocation

    Dynamic memory allocation occurs at runtime for objects and data structures of non-primitive types. The actual data of these objects is stored in the heap, while references to them are stored on the stack.

    Example

    Let’s observe an example where creating a list dynamically allocates memory on the heap.

    a = [0] * 10
    print(a)

    Output

    On executing the above program, you will get the following results −

    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    

    Garbage Collection in Python

    Garbage Collection in Python is the process of automatically freeing up memory that is no longer in use by objects, making it available for other objects. Python's garbage collector runs during program execution and is triggered when an object's reference count drops to zero.
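    Reference counting alone cannot reclaim objects that refer to each other in a cycle, which is where the collector in the gc module steps in. Here is a minimal sketch (the Node class and attribute names are arbitrary) −

    import gc

    class Node:
        pass

    # Create two objects that reference each other, forming a cycle
    a = Node()
    b = Node()
    a.partner = b
    b.partner = a

    # Drop the external references; the cycle keeps both reference counts above zero
    del a
    del b

    # The cyclic garbage collector finds and reclaims the unreachable cycle
    unreachable = gc.collect()
    print("Unreachable objects found:", unreachable)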

    Reference Counting

    Python’s primary garbage collection mechanism is reference counting. Every object in Python maintains a reference count that tracks how many aliases (or references) point to it. When an object’s reference count drops to zero, the garbage collector deallocates the object.

    Reference counting works as follows −

    • Increasing Reference Count − When a new reference to an object is created, the reference count increases.
    • Decreasing Reference Count − When a reference to an object is removed or goes out of scope, the reference count decreases.

    Example

    Here is an example that demonstrates the working of reference counting in Python.

    import sys
    
    # Create a string object
    name ="Tutorialspoint"print("Initial reference count:", sys.getrefcount(name))# Assign the same string to another variable
    other_name ="Tutorialspoint"print("Reference count after assignment:", sys.getrefcount(name))# Concatenate the string with another string
    string_sum = name +' Python'print("Reference count after concatenation:", sys.getrefcount(name))# Put the name inside a list multiple times
    list_of_names =[name, name, name]print("Reference count after creating a list with 'name' 3 times:", sys.getrefcount(name))# Deleting one more reference to 'name'del other_name
    print("Reference count after deleting 'other_name':", sys.getrefcount(name))# Deleting the list referencedel list_of_names
    print("Reference count after deleting the list:", sys.getrefcount(name))

    Output

    On executing the above program, you will get the following results −

    Initial reference count: 4
    Reference count after assignment: 5
    Reference count after concatenation: 5
    Reference count after creating a list with 'name' 3 times: 8
    Reference count after deleting 'other_name': 7
    Reference count after deleting the list: 4
    
  • Object Internals

    The internals of Python objects provide deeper insight into how Python manages and manipulates data. This knowledge is essential for writing efficient, optimized code and for effective debugging.

    Whether we're handling immutable or mutable objects, managing memory with reference counting and garbage collection, or leveraging special methods and slots, grasping these concepts is fundamental to mastering Python programming.

    Understanding Python’s object internals is crucial for optimizing code and debugging. Following is an overview of the key aspects of Python object internals −

    Object Structure

    In Python every object is a complex data structure that encapsulates various pieces of information. Understanding the object structure helps developers to grasp how Python manages memory and handles data.

    Each Python object mainly consists of two parts, as mentioned below and illustrated in the sketch after this list −

    • Object Header: This is a crucial part of every Python object that contains essential information for the Python interpreter to manage the object effectively. It typically consists of two main components namely Reference count and Type Pointer.
    • Object Data: This data is the actual data contained within the object which can differ based on the object’s type. For example an integer contains its numeric value while a list contains references to its elements.
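    Neither part is exposed directly, but the information kept in the object header can be glimpsed from Python, as in this small sketch: sys.getrefcount() reads the reference count (it reports one extra reference for its own temporary argument) and type() follows the type pointer −

    import sys

    data = [1, 2, 3]

    # Reference count stored in the object header
    print(sys.getrefcount(data))

    # Type pointer stored in the object header
    print(type(data))   # <class 'list'>

    # Object data: the actual elements the list refers to
    print(data)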
    Object Identity

    The identity of an object is a unique integer that represents its memory address. It remains constant during the object's lifetime. Every object in Python has a unique identifier, which can be obtained using the id() function.

    Example

    Following is an example of getting the object identity −

    # Example of getting the id of a string object
    a = "Tutorialspoint"
    print(id(a))

    On executing the above code we will get the following output −

    2366993878000

    Note: The memory address will change on every execution of the code.

    Object Type

    The type of an object defines the operations that can be performed on it. For example, integers, strings and lists have distinct types. An object's type is defined by its class and can be accessed using the type() function.

    Example

    Here is an example of it −

    a = "Tutorialspoint"
    print(type(a))

    On executing the above code we will get the following output −

    <class 'str'>

    Object Value

    The value of an object is the actual data it holds. This can be a primitive value like an integer or string, or a more complex data structure like a list or dictionary.

    Example

    Following is an example of the object value −

    b = "Welcome to Tutorialspoint"
    print(b)

    On executing the above code we will get the following output −

    Welcome to Tutorialspoint

    Memory Management

    Memory management in Python is a critical aspect of the language's design, ensuring efficient use of resources while handling object lifetimes and garbage collection. Here are the key components of memory management in Python −

    • Reference Counting: Python uses reference counting to manage memory. Each object keeps track of how many references point to it. When this count drops to zero, the memory can be freed.
    • Garbage Collection: In addition to reference counting, Python employs a garbage collector to identify and clean up reference cycles.

    Example

    Following is an example of getting the reference count in memory management −

    import sys

    c = [1, 2, 3]
    print(sys.getrefcount(c))   # Shows the reference count

    On executing the above code we will get the following output −

    2

    Attributes and Methods

    Python objects can have attributes and methods, which are accessed using dot notation. Attributes store data, while methods define behavior.

    Example

    class MyClass:
        def __init__(self, value):
            self.value = value

        def display(self):
            print(self.value)

    obj = MyClass(10)
    obj.display()

    On executing the above code we will get the following output −

    10

    Finally, understanding Python's object internals helps optimize performance and debug effectively. By grasping how objects are structured and managed in memory, developers can make informed decisions when writing Python code.
  • Higher Order Functions

    Higher-order functions in Python allow you to manipulate functions, increasing the flexibility and reusability of your code. You can create higher-order functions using nested scopes or callable objects.

    Additionally, the functools module provides utilities for working with higher-order functions, making it easier to create decorators and other function-manipulating constructs. This tutorial will explore the concept of higher-order functions in Python and demonstrate how to create them.

    What is a Higher-Order Function?

    A higher-order function is a function that either takes one or more functions as arguments or returns a function as its result. Below you can observe some of the properties of higher-order functions in Python; the sketch after this list demonstrates them −

    • A function can be stored in a variable.
    • A function can be passed as a parameter to another function.
    • Functions can be stored in data structures such as lists, hash tables, etc.
    • A function can be returned from another function.
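    The following sketch demonstrates each of these properties in turn (the shout, whisper, greet, and get_style functions are made up for illustration) −

    def shout(text):
        return text.upper()

    def whisper(text):
        return text.lower()

    # 1. A function can be stored in a variable
    speak = shout
    print(speak("hello"))                        # HELLO

    # 2. A function can be passed as a parameter to another function
    def greet(style):
        return style("Good morning")
    print(greet(whisper))                        # good morning

    # 3. Functions can be stored in data structures such as lists
    styles = [shout, whisper]
    print([style("hey") for style in styles])    # ['HEY', 'hey']

    # 4. A function can be returned from another function
    def get_style(loud):
        return shout if loud else whisper
    print(get_style(True)("bye"))                # BYE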

    To create a higher-order function in Python, you can use nested scopes or callable objects. Below, we will discuss them briefly.

    Creating Higher Order Function with Nested Scopes

    One way of defining a higher-order function in Python is by using nested scopes. This involves defining a function within another function and returning the inner function.

    Example

    Let’s observe the following example of creating a higher-order function in Python. In this example, the multiplier function takes one argument, a, and returns another function, multiply, which calculates the value a * b.

    def multiplier(a):
        # Nested function with second number
        def multiply(b):
            # Multiplication of two numbers
            return a * b
        return multiply

    # Assigning nested multiply function to a variable
    multiply_second_number = multiplier(5)

    # Using the variable as a higher-order function
    Result = multiply_second_number(10)

    # Printing result
    print("Multiplication of Two numbers is: ", Result)

    Output

    On executing the above program, you will get the following results −

    Multiplication of Two numbers is:  50
    

    Creating Higher-Order Functions with Callable Objects

    Another approach to create higher-order functions is by using callable objects. This involves defining a class with a __call__ method.

    Example

    Here is an example of creating a higher-order function using a callable object.

    class Multiplier:
        def __init__(self, factor):
            self.factor = factor

        def __call__(self, x):
            return self.factor * x

    # Create an instance of the Multiplier class
    multiply_second_number = Multiplier(2)

    # Call the Multiplier object to compute factor * x
    Result = multiply_second_number(100)

    # Printing result
    print("Multiplication of Two numbers is: ", Result)

    Output

    On executing the above program, you will get the following results −

    Multiplication of Two numbers is:  200
    

    Higher-order functions with the ‘functools’ Module

    The functools module provides higher-order functions that act on or return other functions. Any callable object can be treated as a function for the purposes of this module.

    Working with Higher-order functions using the wraps()

    In this example, my_decorator is a higher-order function that modifies the behavior of invite function using the functools.wraps() function.

    import functools
    
    def my_decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            print("Calling", f.__name__)
            return f(*args, **kwargs)
        return wrapper

    @my_decorator
    def invite(name):
        print(f"Welcome to, {name}!")
    
    invite("Tutorialspoint")

    Output

    On executing the above program, you will get the following results −

    Calling invite
    Welcome to, Tutorialspoint!
    

    Working with Higher-order functions using the partial()

    The partial() function of the functools module is used to create a callable 'partial' object. This object itself behaves like a function. The partial() function receives another function as an argument and freezes some portion of that function's arguments, resulting in a new object with a simplified signature.

    Example

    In the following example, the user-defined function myfunction() is passed to partial(), which sets a default value for one of the original function's arguments.

    import functools

    def myfunction(a, b):
        return a * b

    partfunction = functools.partial(myfunction, b=10)
    print(partfunction(10))

    Output

    On executing the above program, you will get the following results −

    100
    

    Working with Higher-order functions using the reduce()

    Similar to the above approach, the functools module provides the reduce() function, which receives two arguments: a function and an iterable. It returns a single value. The argument function is applied cumulatively to pairs of items in the iterable from left to right: the result of the first call becomes the first argument of the next call, with the next item in the iterable as the second argument. This is repeated until the iterable is exhausted.

    Example

    import functools

    def mult(x, y):
        return x * y

    # Define a number to calculate factorial
    n = 4
    num = functools.reduce(mult, range(1, n + 1))
    print(f'Factorial of {n}: ', num)

    Output

    On executing the above program, you will get the following results −

    Factorial of 4:  24
    
  • Custom Exceptions

    What are Custom Exceptions in Python?

    Python custom exceptions are user-defined error classes that extend the base Exception class. Developers can define and handle specific error conditions that are unique to their application. Developers can improve their code by creating custom exceptions. This allows for more meaningful error messages and facilitates the debugging process by indicating what kind of error occurred and where it originated.

    To define a custom exception, we typically create a new class that inherits from Python's built-in Exception class or one of its subclasses. The custom exception can then be raised with the raise statement and caught in a corresponding except block.

    Developers can control the flow of the program when specific errors occur and take appropriate actions such as logging the error, retrying operations or gracefully shutting down the application. Custom exceptions can carry additional context or data about the error by overriding the __init__ method and storing extra information as instance attributes.

    Using custom exceptions improves the clarity of error handling in complex programs. It helps to distinguish between different types of errors that may require different handling strategies. For example, a file parsing library might define exceptions like FileFormatError, MissingFieldError or InvalidFieldError to handle the various issues that can arise during file processing. This level of granularity allows the client code to catch and address specific issues more effectively, improving the robustness and user experience of the software. Python's custom exceptions are a great tool for handling errors and writing more expressive code.
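    As a sketch of that idea (the exception names and the line_number attribute below are illustrative, not part of any real library), a file parsing module might define a small hierarchy and attach extra context to an error by overriding __init__ −

    class FileParseError(Exception):
        """Base class for all parsing-related errors."""
        pass

    class FileFormatError(FileParseError):
        def __init__(self, message, line_number):
            super().__init__(message)
            self.line_number = line_number   # extra context carried by the exception

    class MissingFieldError(FileParseError):
        pass

    def parse_line(line, number):
        if ";" not in line:
            raise FileFormatError("Expected ';' separator", line_number=number)

    try:
        parse_line("name value", 12)
    except FileFormatError as e:
        print(f"Format error on line {e.line_number}: {e}")
    except FileParseError as e:
        print(f"Other parsing problem: {e}")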

    Why to Use Custom Exceptions?

    Custom exceptions in Python offer several advantages that enhance the functionality, readability and maintainability of our code. Here are the key reasons for using custom exceptions −

    • Specificity: Custom exceptions allow us to represent specific error conditions that are unique to our application.
    • Clarity: They make the code more understandable by clearly describing the nature of the errors.
    • Granularity: Custom exceptions allow for more precise error handling.
    • Consistency: They help to maintain a consistent error-handling strategy across the codebase.

    Creating Custom Exceptions

    Creating custom exceptions in Python involves defining new exception classes that extend from Python’s built-in Exception class or any of its subclasses. This allows developers to create specialized error types that cater to specific scenarios within their applications. Here’s how we can create and use custom exceptions effectively −

    Define the Custom Exception Class

    We can start creating the custom exceptions by defining a new class that inherits from Exception or another exception class such as RuntimeError, ValueError, etc. depending on the nature of the error.

    Following is the example of defining the custom exception class. In this example CustomError is a custom exception class that inherits from Exception. The __init__ method initializes the exception with an optional error message −

    class CustomError(Exception):
        def __init__(self, message):
            super().__init__(message)
            self.message = message

    Raise the Custom Exception

    To raise the custom exception we can use the raise statement followed by an instance of our custom exception class. Optionally we can pass a message to provide context about the error.

    In the function process_data(), the CustomError exception is raised when the data parameter is empty, indicating a specific error condition.

    def process_data(data):
        if not data:
            raise CustomError("Empty data provided")
        # Processing logic here

    Handle the Custom Exception

    To handle the custom exception we have to use a try-except block. Catch the custom exception class in the except block and handle the error as needed.

    In the code below, if process_data([]) raises a CustomError, the except block catches it and we can print the error message (e.message) or perform other error-handling tasks.

    try:
        process_data([])
    except CustomError as e:
        print(f"Custom Error occurred: {e.message}")
        # Additional error handling logic

    Example of Custom Exception

    Following is the basic example of custom exception handling in Python. In this example we define a custom exception by subclassing the built-in Exception class and use a try-except block to handle the custom exception −

    # Define a custom exception
    class CustomError(Exception):
        def __init__(self, message):
            self.message = message
            super().__init__(self.message)

    # Function that raises the custom exception
    def check_value(value):
        if value < 0:
            raise CustomError("Value cannot be negative!")
        else:
            return f"Value is {value}"

    # Using the function with exception handling
    try:
        result = check_value(-5)
        print(result)
    except CustomError as e:
        print(f"Caught an exception: {e.message}")

    Output

    On executing the above code we will get the following output −

    Caught an exception: Value cannot be negative!
    
  • Abstract Base Classes

    An Abstract Base Class (ABC) in Python is a class that cannot be instantiated directly and is intended to be subclassed. ABCs serve as blueprints for other classes by providing a common interface that all subclasses must implement.

    They are a fundamental part of object-oriented programming in Python which enables the developers to define and enforce a consistent API for a group of related classes.

    Purpose of Abstract Base Classes

    Here's an in-depth look at the purpose and functionality of Abstract Base Classes in Python −

    Defining a Standard Interface

    An Abstract Base Class (ABC) allows us to define a blueprint for other classes. This blueprint ensures that any class deriving from the ABC implements certain methods, providing a consistent interface.

    Following is example code defining the standard interface of an Abstract Base Class in Python −

    from abc import ABC, abstractmethod

    class Shape(ABC):
        @abstractmethod
        def area(self):
            pass

        @abstractmethod
        def perimeter(self):
            pass

    Enforcing Implementation

    When a class inherits from an Abstract Base Class (ABC), it must implement all abstract methods; otherwise, Python will raise a TypeError as soon as you try to instantiate it. Here is an example of enforcing implementation of the Abstract Base Class in Python −

    class Rectangle(Shape):
        def __init__(self, width, height):
            self.width = width
            self.height = height

        def area(self):
            return self.width * self.height

        def perimeter(self):
            return 2 * (self.width + self.height)

    # This will work
    rect = Rectangle(5, 10)

    # Instantiating this class will raise TypeError,
    # because it does not implement the abstract methods
    class IncompleteShape(Shape):
        pass

    Providing a Template for Future Development

    Abstract Base Classes (ABCs) are useful in large projects where multiple developers might work on different parts of the codebase. They provide a clear template for developers to follow, which ensures consistency and reduces errors.

    Facilitating Polymorphism

    Abstract Base Classes (ABCs) make polymorphism possible by enabling the development of code that can operate with objects from diverse classes as long as they conform to a specific interface. This capability streamlines the extension and upkeep of code.

    Below is an example of facilitating polymorphism with an Abstract Base Class in Python −

    def print_shape_info(shape: Shape):
        print(f"Area: {shape.area()}")
        print(f"Perimeter: {shape.perimeter()}")

    square = Rectangle(4, 4)
    print_shape_info(square)

    Note: To execute the above example code, the classes from the Defining a Standard Interface and Enforcing Implementation sections must be defined first.

    Components of Abstract Base Classes

    Abstract Base Classes (ABCs) in Python consist of several key components that enable them to define and enforce interfaces for subclasses.

    These components include the ABC class, the abstractmethod decorator, and several others that help in creating and managing abstract base classes. Here are the key components of Abstract Base Classes, followed by a short sketch that ties them together −

    • ABC Class: This class from Python's abc module serves as the foundation for creating abstract base classes. Any class derived from ABC is considered an abstract base class.
    • ‘abstractmethod’ Decorator: This decorator from the abc module is used to declare methods as abstract. These methods do not have implementations in the ABC and must be overridden in derived classes.
    • ‘ABCMeta’ Metaclass: This is the metaclass used by ABC. It is responsible for keeping track of which methods are abstract and ensuring that instances of the abstract base class cannot be created if any abstract methods are not implemented.
    • Concrete Methods in ABCs: Abstract base classes can also define concrete methods that provide a default implementation. These methods can be used or overridden by subclasses.
    • Instantiation Restrictions: A key feature of ABCs is that they cannot be instantiated directly if they have any abstract methods. Attempting to instantiate an ABC with unimplemented abstract methods will raise a ‘TypeError’.
    • Subclass Verification: Abstract Base Classes (ABCs) can verify if a given class is a subclass using the issubclass function and can check instances with the isinstance function.
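    The short sketch below (the Serializer classes are made up for illustration) ties these components together: an abstract method, a concrete method with a default implementation, subclass and instance verification, and the instantiation restriction −

    from abc import ABC, abstractmethod
    import json

    class Serializer(ABC):
        @abstractmethod
        def dumps(self, obj):
            pass

        # Concrete method with a default implementation, inherited by subclasses
        def describe(self):
            return f"{type(self).__name__} serializer"

    class JsonSerializer(Serializer):
        def dumps(self, obj):
            return json.dumps(obj)

    # Subclass and instance verification
    print(issubclass(JsonSerializer, Serializer))    # True
    print(isinstance(JsonSerializer(), Serializer))  # True
    print(JsonSerializer().describe())               # JsonSerializer serializer

    # Instantiation restriction: the ABC itself cannot be instantiated
    try:
        Serializer()
    except TypeError as e:
        print(e)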

    Example of Abstract Base Classes in Python

    The following example shows how ABCs enforce method implementation, support polymorphism, and provide a clear and consistent interface for related classes −

    from abc import ABC, abstractmethod
    import math

    class Shape(ABC):
        @abstractmethod
        def area(self):
            pass

        @abstractmethod
        def perimeter(self):
            pass

        def description(self):
            return "I am a shape."

    class Rectangle(Shape):
        def __init__(self, width, height):
            self.width = width
            self.height = height

        def area(self):
            return self.width * self.height

        def perimeter(self):
            return 2 * (self.width + self.height)

    class Circle(Shape):
        def __init__(self, radius):
            self.radius = radius

        def area(self):
            return math.pi * self.radius ** 2

        def perimeter(self):
            return 2 * math.pi * self.radius

    def print_shape_info(shape):
        print(shape.description())
        print(f"Area: {shape.area()}")
        print(f"Perimeter: {shape.perimeter()}")

    shapes = [Rectangle(5, 10), Circle(7)]
    for shape in shapes:
        print_shape_info(shape)
        print("-" * 20)

    class IncompleteShape(Shape):
        pass

    try:
        incomplete_shape = IncompleteShape()
    except TypeError as e:
        print(e)

    Output

    On executing the above code we will get the following output −

    I am a shape.
    Area: 50
    Perimeter: 30
    --------------------
    I am a shape.
    Area: 153.93804002589985
    Perimeter: 43.982297150257104
    --------------------
    Can't instantiate abstract class IncompleteShape with abstract methods area, perimeter
    
  • GUIs

    In this chapter, you will learn about some popular Python IDEs (Integrated Development Environments) and how to use an IDE for program development.

    To use the scripted mode of Python, you need to save the sequence of Python instructions in a text file with the .py extension. You can use any text editor available on your operating system. Whenever the interpreter encounters errors, the source code needs to be edited and run again and again. To avoid this tedious cycle, an IDE is used. An IDE is a one-stop solution for typing and editing source code, detecting errors, and executing the program.

    IDLE

    Python’s standard library contains the IDLE module. IDLE stands for Integrated Development and Learning Environment. As the name suggests, it is useful when one is in the learning stage. It includes a Python interactive shell and a code editor, customized to the needs of Python language structure. Some of its important features include syntax highlighting, auto-completion, customizable interface etc.

    To write a Python script, open a new text editor window from the File menu.


    A new editor window opens in which you can enter the Python code. Save it and run it with Run menu.


    Jupyter Notebook

    Initially developed as a web interface for IPython, Jupyter Notebook supports multiple languages. The name itself derives from letters in the names of the supported languages − Julia, Python and R. Jupyter Notebook is a client-server application. The server is launched at localhost, and the browser acts as its client.

    Install Jupyter notebook with PIP −

    pip3 install jupyter
    

    Invoke from the command line.

    C:\Users\Acer>jupyter notebook
    

    The server is launched at localhost’s 8888 port number.


    The default browser of your system opens a link http://localhost:8888/tree to display the dashboard.


    Open a new Python notebook. It shows IPython style input cell. Enter Python instructions and run the cell.


    Jupyter notebook is a versatile tool, used very extensively by data scientists to display inline data visualizations. The notebook can be conveniently converted and distributed in PDF, HTML or Markdown format.

    VS Code

    Microsoft has developed a source code editor called VS Code (Visual Studio Code) that supports multiple languages including C++, Java, Python and others. It provides features such as syntax highlighting, autocomplete, debugger and version control.

    VS Code is a freeware. It is available for download and install from https://code.visualstudio.com/.

    Launch VS Code from the start menu (in Windows).


    You can also launch VS Code from command line −

    C:\test>code .

    VS Code cannot be used for a language unless the respective language extension is installed. The VS Code Extensions marketplace has a number of extensions for language compilers and other utilities. Search for the Python extension from the Extensions tab (Ctrl+Shift+X) and install it.


    After activating the Python extension, you need to set the Python interpreter. Press Ctrl+Shift+P and select the Python interpreter.


    Open a new text file, enter Python code and save the file.


    Open a command prompt terminal and run the program.


    PyCharm

    PyCharm is another popular Python IDE. It has been developed by JetBrains, a Czech software company. Its features include code analysis, a graphical debugger, integration with version control systems etc. PyCharm supports web development with Django.

    The community as well as professional editions can be downloaded from https://www.jetbrains.com/pycharm/download.

    Download and install the latest version (2022.3.2 at the time of writing) and open PyCharm. The Welcome screen appears as shown below −


    When you start a new project, PyCharm creates a virtual environment for it based on the choice of folder location and the version of Python interpreter chosen.


    You can now add one or more Python scripts required for the project. Here we add a sample Python code in main.py file.


    To execute the program, choose from Run menu or use Shift+F10 shortcut.


    Output will be displayed in the console window as shown below −

  • Tools Utilities

    The standard library comes with a number of modules that can be used both as modules and as command-line utilities.

    The dis Module

    The dis module is the Python disassembler. It converts byte codes to a format that is slightly more appropriate for human consumption.

    You can run the disassembler from the command line. It compiles the given script and prints the disassembled byte codes to the STDOUT. You can also use dis as a module. The dis function takes a class, method, function or code object as its single argument.

    Example

    import dis

    def sum():
        vara = 10
        varb = 20
        sum = vara + varb
        print("vara + varb = %d" % sum)

    # Call the dis function for the function.
    dis.dis(sum)

    This would produce the following result −

      3           0 LOAD_CONST               1 (10)
    
              2 STORE_FAST               0 (vara)
      4           4 LOAD_CONST               2 (20)
              6 STORE_FAST               1 (varb)
      6           8 LOAD_FAST                0 (vara)
             10 LOAD_FAST                1 (varb)
             12 BINARY_ADD
             14 STORE_FAST               2 (sum)
      7          16 LOAD_GLOBAL              0 (print)
             18 LOAD_CONST               3 ('vara + varb = %d')
             20 LOAD_FAST                2 (sum)
             22 BINARY_MODULO
             24 CALL_FUNCTION            1
             26 POP_TOP
             28 LOAD_CONST               0 (None)
             30 RETURN_VALUE

    The pdb Module

    The pdb module is the standard Python debugger. It is based on the bdb debugger framework.

    You can run the debugger from the command line (type n [or next] to go to the next line and help to get a list of available commands) −

    Example

    Before you try to run pdb.py, set your path properly to the Python lib directory. Let us try it with the above example, sum.py −

    $ pdb.py sum.py
    > /test/sum.py(3)<module>()
    -> import dis
    (Pdb) n
    > /test/sum.py(5)<module>()
    -> def sum():
    (Pdb) n
    > /test/sum.py(14)<module>()
    -> dis.dis(sum)
    (Pdb) n
      6           0 LOAD_CONST               1 (10)
                  3 STORE_FAST               0 (vara)

      7           6 LOAD_CONST               2 (20)
                  9 STORE_FAST               1 (varb)

      9          12 LOAD_FAST                0 (vara)
                 15 LOAD_FAST                1 (varb)
                 18 BINARY_ADD
                 19 STORE_FAST               2 (sum)

     10          22 LOAD_CONST               3 ('vara + varb = %d')
                 25 LOAD_FAST                2 (sum)
                 28 BINARY_MODULO
                 29 PRINT_ITEM
                 30 PRINT_NEWLINE
                 31 LOAD_CONST               0 (None)
                 34 RETURN_VALUE
    --Return--
    > /test/sum.py(14)<module>()->None
    -> dis.dis(sum)
    (Pdb) n
    --Return--
    > <string>(1)<module>()->None
    (Pdb)

    The profile Module

    The profile module is the standard Python profiler. You can run the profiler from the command line −

    Example

    Let us try to profile the following program −

    vara = 10
    varb = 20
    sum = vara + varb
    print("vara + varb = %d" % sum)

    Now, try running cProfile.py over this file sum.py as follows −

    $ cProfile.py sum.py
    vara + varb = 30
             4 function calls in 0.000 CPU seconds
       Ordered by: standard name
       ncalls  tottime  percall  cumtime  percall filename:lineno
            1    0.000    0.000    0.000    0.000 <string>:1(<module>)
            1    0.000    0.000    0.000    0.000 sum.py:3(<module>)
            1    0.000    0.000    0.000    0.000 {execfile}
            1    0.000    0.000    0.000    0.000 {method ......}

    The tabnanny Module

    The tabnanny module checks Python source files for ambiguous indentation. If a file mixes tabs and spaces in a way that throws off indentation, no matter what tab size you’re using, the nanny complains.

    Example

    Let us check the following program with tabnanny −

    vara = 10
    varb = 20
    sum = vara + varb
    print("vara + varb = %d" % sum)

    If you check a correctly indented file with tabnanny.py, it won't complain, as shown below −

    $tabnanny.py -v sum.py
    'sum.py': Clean bill of health.
    
  • Further Extensions

    Any code that you write using any compiled language like C, C++, or Java can be integrated or imported into another Python script. This code is considered as an “extension.”

    A Python extension module is nothing more than a normal C library. On Unix machines, these libraries usually end in .so (for shared object). On Windows machines, you typically see .dll (for dynamically linked library).

    Pre-Requisites for Writing Extensions

    To start writing your extension, you are going to need the Python header files.

    • On Unix machines, this usually requires installing a developer-specific package.
    • Windows users get these headers as part of the package when they use the binary Python installer.

    Additionally, it is assumed that you have a good knowledge of C or C++ to write any Python Extension using C programming.

    First look at a Python Extension

    For your first look at a Python extension module, you need to group your code into four parts −

    • The header file Python.h.
    • The C functions you want to expose as the interface from your module.
    • A table mapping the names of your functions, as Python developers see them, to the C functions inside the extension module.
    • An initialization function.

    The Header File Python.h

    You need to include Python.h header file in your C source file, which gives you the access to the internal Python API used to hook your module into the interpreter.

    Make sure to include Python.h before any other headers you might need. You need to follow the includes with the functions you want to call from Python.

    The C Functions

    The signatures of the C implementation of your functions always takes one of the following three forms −

    static PyObject *MyFunction(PyObject *self, PyObject *args);
    static PyObject *MyFunctionWithKeywords(PyObject *self,
       PyObject *args,
       PyObject *kw);
    static PyObject *MyFunctionWithNoArgs(PyObject *self);

    Each one of the preceding declarations returns a Python object. There is no such thing as a void function in Python as there is in C. If you do not want your functions to return a value, return the C equivalent of Python’s None value. The Python headers define a macro, Py_RETURN_NONE, that does this for us.

    The names of your C functions can be whatever you like, as they are never seen outside of the extension module. They are declared as static functions.

    Your C functions usually are named by combining the Python module and function names together, as shown here −

    static PyObject *module_func(PyObject *self, PyObject *args) {
       /* Do your stuff here. */
       Py_RETURN_NONE;
    }

    This is a Python function called func inside the module module. You will be putting pointers to your C functions into the method table for the module that usually comes next in your source code.

    The Method Mapping Table

    This method table is a simple array of PyMethodDef structures. That structure looks something like this −

    struct PyMethodDef {
       char *ml_name;
       PyCFunction ml_meth;
       int ml_flags;
       char *ml_doc;
    };

    Here is the description of the members of this structure −

    • ml_name − This is the name of the function as the Python interpreter presents it when the function is used in Python programs.
    • ml_meth − This is the address of a function that has any one of the signatures, described in the previous section.
    • ml_flags − This tells the interpreter which of the three signatures ml_meth is using.
      • This flag usually has a value of METH_VARARGS.
      • This flag can be bitwise OR’ed with METH_KEYWORDS if you want to allow keyword arguments into your function.
      • This can also have a value of METH_NOARGS that indicates you do not want to accept any arguments.
    • ml_doc − This is the docstring for the function, which could be NULL if you do not feel like writing one.

    This table needs to be terminated with a sentinel that consists of NULL and 0 values for the appropriate members.

    Example

    For the above-defined function, we have the following method mapping table −

    static PyMethodDef module_methods[] = {
       { "func", (PyCFunction)module_func, METH_NOARGS, NULL },
       { NULL, NULL, 0, NULL }
    };

    The Initialization Function

    The last part of your extension module is the initialization function. This function is called by the Python interpreter when the module is loaded. It is required that the function be named initModule, where Module is the name of the module.

    The initialization function needs to be exported from the library you will be building. The Python headers define PyMODINIT_FUNC to include the appropriate incantations for that to happen for the particular environment in which we are compiling. All you have to do is use it when defining the function.

    Your C initialization function generally has the following overall structure −

    PyMODINIT_FUNC initModule() {
       Py_InitModule3("Module", module_methods, "docstring...");
    }

    Here is the description of Py_InitModule3 function −

    • Module − This is the name of the module being initialized, passed as a string.
    • module_methods − This is the mapping table name defined above.
    • docstring − This is the comment you want to give in your extension.

    Putting all this together, it looks like the following −

    #include <Python.h>

    static PyObject *module_func(PyObject *self, PyObject *args) {
       /* Do your stuff here. */
       Py_RETURN_NONE;
    }

    static PyMethodDef module_methods[] = {
       { "func", (PyCFunction)module_func, METH_NOARGS, NULL },
       { NULL, NULL, 0, NULL }
    };

    PyMODINIT_FUNC initModule() {
       Py_InitModule3("Module", module_methods, "docstring...");
    }

    Example

    A simple example that makes use of all the above concepts −

    #include <Python.h>

    static PyObject* helloworld(PyObject* self) {
       return Py_BuildValue("s", "Hello, Python extensions!!");
    }

    static char helloworld_docs[] =
       "helloworld( ): Any message you want to put here!!\n";

    static PyMethodDef helloworld_funcs[] = {
       { "helloworld", (PyCFunction)helloworld, METH_NOARGS, helloworld_docs },
       { NULL }
    };

    void inithelloworld(void) {
       Py_InitModule3("helloworld", helloworld_funcs, "Extension module example!");
    }

    Here, the Py_BuildValue function is used to build a Python value. Save the above code in a file named hello.c. Next, we will see how to compile and install this module so that it can be called from a Python script.

    Building and Installing Extensions

    The distutils package makes it very easy to distribute Python modules, both pure Python and extension modules, in a standard way. Modules are distributed in source form, then built and installed via a setup script usually called setup.py.

    For the above module, you need to prepare the following setup.py script −

    from distutils.core import setup, Extension

    setup(name='helloworld', version='1.0',
          ext_modules=[Extension('helloworld', ['hello.c'])])

    Now, use the following command, which would perform all needed compilation and linking steps, with the right compiler and linker commands and flags, and copies the resulting dynamic library into an appropriate directory −

    $ python setup.py install
    

    On Unix-based systems, you will most likely need to run this command as root in order to have permissions to write to the site-packages directory. This usually is not a problem on Windows.

    Importing Extensions

    Once you install your extensions, you would be able to import and call that extension in your Python script as follows −

    import helloworld
    print(helloworld.helloworld())

    This would produce the following output −

    Hello, Python extensions!!
    

    Passing Function Parameters

    As you will most likely want to define functions that accept arguments, you can use one of the other signatures for your C functions. For example, the following function, that accepts some number of parameters, would be defined like this −

    static PyObject *module_func(PyObject *self, PyObject *args) {
       /* Parse args and do something interesting here. */
       Py_RETURN_NONE;
    }

    The method table containing an entry for the new function would look like this −

    static PyMethodDef module_methods[] = {
       { "func", (PyCFunction)module_func, METH_NOARGS, NULL },
       { "func", module_func, METH_VARARGS, NULL },
       { NULL, NULL, 0, NULL }
    };

    You can use the API PyArg_ParseTuple function to extract the arguments from the one PyObject pointer passed into your C function.

    The first argument to PyArg_ParseTuple is the args argument. This is the object you will be parsing. The second argument is a format string describing the arguments as you expect them to appear. Each argument is represented by one or more characters in the format string as follows.

    static PyObject *module_func(PyObject *self, PyObject *args) {
       int i;
       double d;
       char *s;

       if (!PyArg_ParseTuple(args, "ids", &i, &d, &s)) {
          return NULL;
       }

       /* Do something interesting here. */
       Py_RETURN_NONE;
    }

    Compiling the new version of your module and importing it enables you to invoke the new function with the three arguments described by the format string −

    module.func(1, 2.0, "three")

    To accept keyword arguments as well, the function would need the METH_KEYWORDS flag and the PyArg_ParseTupleAndKeywords function instead of PyArg_ParseTuple.

    The PyArg_ParseTuple Function

    Here is the standard signature for the PyArg_ParseTuple function −

    int PyArg_ParseTuple(PyObject *tuple, char *format, ...)

    This function returns 0 for errors, and a value not equal to 0 for success. Tuple is the PyObject* that was the C function’s second argument. Here format is a C string that describes mandatory and optional arguments.

    Here is a list of format codes for the PyArg_ParseTuple function −

    Code    C type              Meaning
    c       char                A Python string of length 1 becomes a C char.
    d       double              A Python float becomes a C double.
    f       float               A Python float becomes a C float.
    i       int                 A Python int becomes a C int.
    l       long                A Python int becomes a C long.
    L       long long           A Python int becomes a C long long.
    O       PyObject*           Gets non-NULL borrowed reference to Python argument.
    s       char*               Python string without embedded nulls to C char*.
    s#      char* + int         Any Python string to C address and length.
    t#      char* + int         Read-only single-segment buffer to C address and length.
    u       Py_UNICODE*         Python Unicode without embedded nulls to C.
    u#      Py_UNICODE* + int   Any Python Unicode to C address and length.
    w#      char* + int         Read/write single-segment buffer to C address and length.
    z       char*               Like s, also accepts None (sets C char* to NULL).
    z#      char* + int         Like s#, also accepts None (sets C char* to NULL).
    (...)   as per ...          A Python sequence is treated as one argument per item.
    |                           The following arguments are optional.
    :                           Format end, followed by function name for error messages.
    ;                           Format end, followed by entire error message text.

    Returning Values

    Py_BuildValue takes in a format string much like PyArg_ParseTuple does. Instead of passing in the addresses of the values you are building, you pass in the actual values. Here is an example showing how to implement an add function.

    static PyObject *foo_add(PyObject *self, PyObject *args) {
       int a;
       int b;

       if (!PyArg_ParseTuple(args, "ii", &a, &b)) {
          return NULL;
       }

       return Py_BuildValue("i", a + b);
    }

    This is what it would look like if implemented in Python −

    def add(a, b):
        return (a + b)

    You can return two values from your function as follows. This would be captured as a tuple in Python.

    static PyObject *foo_add_subtract(PyObject *self, PyObject *args) {
       int a;
       int b;

       if (!PyArg_ParseTuple(args, "ii", &a, &b)) {
          return NULL;
       }

       return Py_BuildValue("ii", a + b, a - b);
    }

    This is what it would look like if implemented in Python −

    def add_subtract(a, b):
        return (a + b, a - b)

    The Py_BuildValue Function

    Here is the standard signature for Py_BuildValue function −

    PyObject* Py_BuildValue(char *format, ...)

    Here format is a C string that describes the Python object to build. The following arguments of Py_BuildValue are C values from which the result is built. The PyObject* result is a new reference.

    The following table lists the commonly used code strings, of which zero or more are joined into a string format.

    Code    C type              Meaning
    c       char                A C char becomes a Python string of length 1.
    d       double              A C double becomes a Python float.
    f       float               A C float becomes a Python float.
    i       int                 A C int becomes a Python int.
    l       long                A C long becomes a Python int.
    N       PyObject*           Passes a Python object and steals a reference.
    O       PyObject*           Passes a Python object and INCREFs it as normal.
    O&      convert + void*     Arbitrary conversion.
    s       char*               C 0-terminated char* to Python string, or NULL to None.
    s#      char* + int         C char* and length to Python string, or NULL to None.
    u       Py_UNICODE*         C-wide, null-terminated string to Python Unicode, or NULL to None.
    u#      Py_UNICODE* + int   C-wide string and length to Python Unicode, or NULL to None.
    w#      char* + int         Read/write single-segment buffer to C address and length.
    z       char*               Like s, also accepts None (sets C char* to NULL).
    z#      char* + int         Like s#, also accepts None (sets C char* to NULL).
    (...)   as per ...          Builds a Python tuple from C values.
    [...]   as per ...          Builds a Python list from C values.
    {...}   as per ...          Builds a Python dictionary from C values, alternating keys and values.

    Code {...} builds dictionaries from an even number of C values, alternately keys and values. For example, Py_BuildValue("{issi}", 23, "zig", "zag", 42) returns a dictionary like Python's {23: 'zig', 'zag': 42}.