<liclass="toctree-l2"><aclass="reference internal"href="desktop.html#difference-between-images-and-containers">Difference between images and containers</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="desktop.html#installation-in-windows">Installation in Windows</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="desktop.html#enable-virtualization-for-windows-machine">Enable Virtualization for Windows Machine</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="Manual.html#step-by-step-installation-in-windows">Step-by-Step installation in Windows</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="Manual.html#activate-virtualization-in-bios">Activate Virtualization in BIOS</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="Manual.html#mounting-user-data-and-running-docker-image">Mounting User data and running Docker image</a></li>
<liclass="toctree-l2"><aclass="reference internal"href="Manual.html#how-to-use-maxwell-td">How to use Maxwell-TD</a></li>
<p>Docker Engine is a powerful tool that simplifies the process of creating, deploying, and managing applications using containers. Here’s an introduction to Docker Engine and its basic functionalities:</p>
<p>Docker Engine is the core component of Docker, a platform that enables developers to package applications and their dependencies into lightweight containers. These containers can then be deployed consistently across different environments, whether it’s a developer’s laptop, a testing server, or a production system.</p>
<li><p><strong>Containerization:</strong> Docker Engine allows you to create and manage containers, which are isolated environments that package an application and its dependencies. Containers ensure consistency in runtime environments across different platforms.</p></li>
<li><p><strong>Image Management:</strong> Docker uses images as templates to create containers. Docker Engine allows you to build, push, and pull images from Docker registries (like Docker Hub or private registries). Images are typically defined using a Dockerfile, which specifies the environment and setup instructions for the application.</p></li>
<li><p><strong>Container Lifecycle Management:</strong> Docker Engine provides commands to start, stop, restart, and remove containers. It also manages the lifecycle of containers, including monitoring their status and resource usage (see the short example after this list).</p></li>
<li><p><strong>Networking:</strong> Docker Engine facilitates networking between containers and between containers and the outside world. It provides mechanisms for containers to communicate with each other and with external networks, as well as configuring networking options like ports and IP addresses.</p></li>
<li><p><strong>Storage Management:</strong> Docker Engine manages storage volumes that persist data generated by containers. It supports various storage drivers and allows you to attach volumes to containers, enabling data persistence and sharing data between containers and the host system.</p></li>
<li><p><strong>Resource Isolation and Utilization:</strong> Docker Engine uses Linux kernel features (such as namespaces and control groups) to provide lightweight isolation and resource utilization for containers. This ensures that containers run efficiently without interfering with each other or with the host system.</p></li>
<li><p><strong>Integration with Orchestration Tools:</strong> Docker Engine can be integrated with orchestration tools like Docker Swarm and Kubernetes for managing containerized applications at scale. Orchestration tools automate container deployment, scaling, and load balancing across multiple hosts.</p></li>
<li><p><strong>Consistency:</strong> Docker ensures consistency between development, testing, and production environments by encapsulating applications and dependencies into containers.</p></li>
<li><p><strong>Efficiency:</strong> Containers are lightweight and share the host system’s kernel, reducing overhead and improving performance compared to traditional virtual machines.</p></li>
<li><p><strong>Portability:</strong> Docker containers can run on any platform that supports Docker, making it easy to move applications between different environments.</p></li>
<li><p><strong>Isolation:</strong> Containers provide a level of isolation that enhances security and stability, as each container operates independently of others on the same host.</p></li>
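<p>As a brief illustration of the lifecycle commands mentioned above, the following shell session creates, inspects, stops, and removes a container. The image and container names are placeholders chosen only for this example.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre># Create and start a container (detached) from an existing image.
# "my_image" and "my_container" are hypothetical names used only for illustration.
docker run -d --name my_container my_image

docker ps                    # list running containers
docker stop my_container     # stop the container
docker start my_container    # start it again
docker rm -f my_container    # remove it (force-stops it if still running)
</pre></div></div>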
<p><spanclass="caption-text">Placing Dockerfile in the directory where we want to create Docker image</span><aclass="headerlink"href="#id1"title="Link to this image"></a></p>
<p>We can examine the contents of the Dockerfile.</p>
<p>We’re creating a Docker image by starting from an existing Docker image that includes a UNIX environment with CUDA runtime. Initially, we pull the base image from Docker’s official repository. Specifically, we can locate suitable base images by searching on <a class="reference external" href="https://hub.docker.com/search?q=nvidia%2Fcuda">Docker Hub</a> (<a class="reference external" href="https://hub.docker.com/search?q=nvidia%2Fcuda">https://hub.docker.com/search?q=nvidia%2Fcuda</a>).</p>
<p><spanclass="caption-text">Find base images online via <cite>Docker Hub <https://hub.docker.com/search?q=nvidia%2Fcuda></cite></span><aclass="headerlink"href="#id2"title="Link to this image"></a></p>
<p>In addition, setting <code class="docutils literal notranslate"><span class="pre">ENV</span> <span class="pre">DEBIAN_FRONTEND=noninteractive</span></code> in a Dockerfile is a directive that adjusts the environment variable <code class="docutils literal notranslate"><span class="pre">DEBIAN_FRONTEND</span></code> within the Docker container during the image build process.</p>
<p>Debian-based Linux distributions, including many Docker base images, use DEBIAN_FRONTEND to determine how certain package management tools (like apt-get) interact with users.
Setting DEBIAN_FRONTEND=noninteractive tells these tools to run in a non-interactive mode. In this mode, the tools assume default behavior for prompts that would normally require user input, such as during package installation or configuration.</p>
<ulclass="simple">
<li><p><strong>Avoiding User Prompts</strong></p></li>
</ul>
<p>During Docker image builds, it’s crucial to automate as much as possible to ensure consistency and reproducibility.
Without setting DEBIAN_FRONTEND=noninteractive, package installations might prompt for user input (e.g., to confirm an installation or choose configuration options). Such interaction halts the build process unless it is explicitly handled in advance.</p>
<ulclass="simple">
<li><p><strong>Common Usage in Dockerfiles</strong></p></li>
</ul>
<p>In Dockerfiles, especially those designed for automated builds (CI/CD pipelines, batch processes), it’s typical to include ENV DEBIAN_FRONTEND=noninteractive early on. This ensures that subsequent commands relying on package management tools proceed without waiting for user input.</p>
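<p>A minimal sketch of how this typically appears near the top of a Dockerfile; the package shown is only an example of one that would otherwise prompt for configuration:</p>
<div class="highlight-dockerfile notranslate"><div class="highlight"><pre># Suppress interactive prompts from apt-get during the build
ENV DEBIAN_FRONTEND=noninteractive

# Subsequent package installations now use default answers for any prompts
RUN apt-get update && apt-get install -y --no-install-recommends tzdata
</pre></div></div>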
<p>Next, we need to install the packages and libraries, and set the environment variables, required to compile and run Maxwell-TD.</p>
<p>This Dockerfile snippet outlines the steps for setting up a Docker image with the various libraries and tools typically required for scientific computing and development environments. Let’s break down each part (a consolidated sketch follows the list):</p>
<ul class="simple">
<li><dl class="simple">
<dt><strong>Update system and install libraries</strong></dt><dd><ul>
<li><p><strong>Purpose</strong>: Updates the package list and installs a set of essential libraries and tools required for compiling and building various applications.</p></li>
<li><p><strong>build-essential, g++, gcc</strong>: Compiler tools and libraries.</p></li>
<li><p><strong>cmake, gfortran</strong>: Build system and Fortran compiler.</p></li>
<li><p>Various development libraries (libopenblas-dev, liblapack-dev, libfftw3-dev, etc.) for numerical computations, linear algebra, and scientific computing.</p></li>
<li><p><strong>libvtk7-dev</strong>: Libraries for 3D computer graphics, visualization, and image processing.</p></li>
<li><p><strong>libgomp1, libomp-dev, libpthread-stubs0-dev</strong>: Libraries for multi-threading support.</p></li>
</ul>
</dd>
</dl>
</li>
<li><dlclass="simple">
<dt><strong>Install Compilers</strong></dt><dd><p>-<strong>Purpose</strong>: Ensures that g++ and gcc are installed. These are essential compilers for C++ and C programming languages, often needed for compiling native code.</p>
</dd>
</dl>
</li>
<li><dlclass="simple">
<dt><strong>Install Python and pip</strong></dt><dd><p>-<strong>Purpose</strong>: Installs Python 3 and pip (Python package installer), which are essential for Python-based applications and managing Python dependencies.</p>
</dd>
</dl>
</li>
<li><dlclass="simple">
<dt><strong>Copy current directory to docker image</strong></dt><dd><p>-<strong>Purpose</strong>: Sets the working directory inside the Docker image to <codeclass="docutils literal notranslate"><spanclass="pre">/dgtd</span></code> and copies all files from the current directory (presumably where the Dockerfile resides) into the <codeclass="docutils literal notranslate"><spanclass="pre">/dgtd</span></code> directory inside the Docker image.</p>
</dd>
</dl>
</li>
<li><dlclass="simple">
<dt><strong>Install Python dependencies</strong></dt><dd><p>-<strong>Purpose</strong>: Installs Python dependencies listed in <codeclass="docutils literal notranslate"><spanclass="pre">requirements.txt</span></code> file located in the /dgtd directory. The <codeclass="docutils literal notranslate"><spanclass="pre">--no-cache-dir</span></code> flag ensures that no cached packages are used during installation, which can be important for Docker images to maintain consistency and avoid unexpected behavior.</p>
</dd>
</dl>
</li>
<li><dlclass="simple">
<dt><strong>Set Path for libraries and CUDA</strong></dt><dd><p>-<strong>Purpose</strong>: Sets environment variables related to CUDA (a parallel computing platform and programming model) if CUDA is used in the project. These variables define paths to CUDA libraries, binaries, headers, and compiler (<codeclass="docutils literal notranslate"><spanclass="pre">nvcc</span></code>).</p>
<p>Finally, we can compile a program using cmake and make, and then sets up the Docker container to start a Bash shell upon running. <codeclass="docutils literal notranslate"><spanclass="pre">CMD</span><spanclass="pre">["bash"]</span></code> sets the default command to run inside the container. When the container is started without specifying a command, it will automatically launch a Bash shell.</p>
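<p>Putting the steps above together, a Dockerfile for this kind of image might look roughly like the following. This is a sketch rather than the exact Maxwell-TD Dockerfile: the base-image tag, the CUDA paths, and the build commands are assumptions and may differ from the real file, while the package list and the /dgtd directory follow the description above.</p>
<div class="highlight-dockerfile notranslate"><div class="highlight"><pre># Base image with CUDA (tag is an assumption; choose one from Docker Hub)
FROM nvidia/cuda:11.8.0-devel-ubuntu20.04

# Avoid interactive prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive

# Update system and install compilers, build tools, and scientific libraries
RUN apt-get update && apt-get install -y \
    build-essential g++ gcc cmake gfortran \
    libopenblas-dev liblapack-dev libfftw3-dev \
    libvtk7-dev libgomp1 libomp-dev libpthread-stubs0-dev \
    python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Copy the current directory into the image and make it the working directory
WORKDIR /dgtd
COPY . /dgtd

# Install Python dependencies without using the pip cache
RUN pip3 install --no-cache-dir -r requirements.txt

# Set paths for CUDA libraries, binaries, and the nvcc compiler
# (typical install locations; adjust if the base image differs)
ENV CUDA_HOME=/usr/local/cuda
ENV PATH=${CUDA_HOME}/bin:${PATH}
ENV LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}

# Build the project with cmake and make (actual build commands may differ)
RUN mkdir -p build && cd build && cmake .. && make -j"$(nproc)"

# Start a Bash shell by default when the container runs
CMD ["bash"]
</pre></div></div>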
<li><p>Use the <code class="docutils literal notranslate"><span class="pre">docker</span> <span class="pre">build</span></code> command to build the Docker image from your Dockerfile.</p></li>
</ol>
<p>Once you have created your Dockerfile and saved it in your project directory, you can build a Docker image using the docker build command. Here’s how you would do it:</p>
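<p>Run the build from the directory that contains the Dockerfile, for example:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre># Build an image named "maxwell_td_image" from the Dockerfile in the current directory
docker build -t maxwell_td_image .
</pre></div></div>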
<li><p><codeclass="docutils literal notranslate"><spanclass="pre">docker</span><spanclass="pre">build</span></code>: This command tells Docker to build an image from a Dockerfile.</p></li>
<li><p><codeclass="docutils literal notranslate"><spanclass="pre">-t</span><spanclass="pre">maxwell_td_image</span></code>: The -t flag is used to tag the image with a name (maxwell_td_image in this case). This name can be whatever you choose and is used to refer to this specific image later on.</p></li>
<li><p><codeclass="docutils literal notranslate"><spanclass="pre">.</span></code>: This specifies the build context. The dot indicates that the Dockerfile and any other files needed for building the image are located in the current directory.</p></li>
<p>This command will list all Docker images that are currently present on your local system.
Each image listed will have columns showing its repository, tag, image ID, creation date, and size.</p>
<ulclass="simple">
<li><p>Finding your image</p></li>
</ul>
<p>Look through the list for the image you just built. If it was successfully built, it should appear in the list.
Check the repository and tag names to identify your specific image. The repository name will be the name you assigned with the <code class="docutils literal notranslate"><span class="pre">-t</span></code> flag when building (maxwell_td_image in this example), and the tag will be latest unless you specified another tag.</p>
<ulclass="simple">
<li><p>Confirming successful build</p></li>
</ul>
<p>If your image appears in the list with the correct details (repository name, tag, etc.), it indicates that Docker successfully built and stored the image on your local machine.</p>
<p>To save a Docker image locally as a tar archive, you’ll use the docker save command. This command packages the Docker image into a tarball archive that can be transferred to other machines or stored for backup purposes. Here’s how you can do it:</p>
<ulclass="simple">
<li><p>Open your terminal (Command Prompt on Windows or Terminal on macOS/Linux).</p></li>
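<p>The command has the following general form; the placeholders are explained below.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre># Save the image to an archive file (the file name and image name are placeholders)
docker save -o <output-file-name>.zip <image-name>
</pre></div></div>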
<p>Replace <code class="docutils literal notranslate"><span class="pre"><output-file-name>.zip</span></code> with the desired name for your archive file.</p>
<p><code class="docutils literal notranslate"><span class="pre"><image-name></span></code>: This specifies the Docker image you want to save.</p>
<ulclass="simple">
<li><p>Confirmation:</p></li>
</ul>
<p>After running the command, Docker will package the specified image into an archive named <output-file-name>.zip.
You should see this file (<output-file-name>.zip) in your current directory unless you specified a different path for the output.</p>
<ul class="simple">
<li><p>Transportability: The generated archive can be transferred to another machine or stored for future use. This is useful for deploying Docker images across different environments without needing to rebuild them.</p></li>
<li><p>File Size: Depending on the size of your Docker image, the resulting archive can be quite large. Ensure you have enough disk space and consider compression if transferring over networks with limited bandwidth.</p></li>
<li><p>Loading the Image: To use the saved archive on another machine, you’ll need to load it into Docker using the <code class="docutils literal notranslate"><span class="pre">docker</span> <span class="pre">load</span></code> command. Here’s how you can do that:</p></li>
</ul>
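<p>A minimal example, assuming the archive name used above:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre># Load the saved image from the archive into the local Docker image store
docker load -i <output-file-name>.zip

docker images   # confirm the loaded image now appears in the list
</pre></div></div>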
<p>To remove specific images, list them using <code class="docutils literal notranslate"><span class="pre">docker</span> <span class="pre">images</span> <span class="pre">-a</span></code> and then delete them with <code class="docutils literal notranslate"><span class="pre">docker</span> <span class="pre">rmi</span></code>:</p>
<divclass="highlight-bash notranslate"><divclass="highlight"><pre><span></span>docker<spanclass="w"></span>images<spanclass="w"></span>-a<spanclass="w"></span><spanclass="c1"># List all images</span>
docker<spanclass="w"></span>rmi<spanclass="w"></span>Image<spanclass="w"></span>Image<spanclass="w"></span><spanclass="c1"># Remove specific images by ID or tag</span>
<p>To remove specific containers, list them using <code class="docutils literal notranslate"><span class="pre">docker</span> <span class="pre">ps</span> <span class="pre">-a</span></code> and then delete them with <code class="docutils literal notranslate"><span class="pre">docker</span> <span class="pre">rm</span></code>:</p>
<divclass="highlight-bash notranslate"><divclass="highlight"><pre><span></span>docker<spanclass="w"></span>ps<spanclass="w"></span>-a<spanclass="w"></span><spanclass="c1"># List all containers</span>
docker<spanclass="w"></span>rm<spanclass="w"></span>ID_or_Name<spanclass="w"></span>ID_or_Name<spanclass="w"></span><spanclass="c1"># Remove specific containers by ID or name</span>
<divclass="highlight-bash notranslate"><divclass="highlight"><pre><span></span>docker<spanclass="w"></span>volume<spanclass="w"></span>ls<spanclass="w"></span><spanclass="c1"># List all volumes</span>
docker<spanclass="w"></span>volume<spanclass="w"></span>rm<spanclass="w"></span>volume_name<spanclass="w"></span>volume_name<spanclass="w"></span><spanclass="c1"># Remove specific volumes by name</span>