Docker For Mac Inspect Image Layers

On macOS and Windows, Docker runs Linux containers in a virtual machine, so when you inspect images you will see Linux paths. An image is made up of files that represent its read-only layers, plus a writable layer on top. Docker Desktop, the preferred choice for millions of developers building containerized apps, is a tool for macOS and Windows machines for building and sharing containerized applications and microservices. Open Docker Desktop and follow the guided onboarding to build your first containerized application in minutes.

To use storage drivers effectively, it’s important to know how Docker builds and stores images, and how these images are used by containers. You can use this information to make informed choices about the best way to persist data from your applications and avoid performance problems along the way.

Storage drivers allow you to create data in the writable layer of your container. The files won’t be persisted after the container is deleted, and both read and write speeds are lower than native file system performance.

Note: Operations that are known to be problematic include write-intensive database storage, particularly when pre-existing data exists in the read-only layers. More details are provided in this document.

Learn how to use volumes to persist data and improve performance.

Images and layers

A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only. Consider the following Dockerfile:
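
For example, a minimal Dockerfile with the four instructions described below might look like this (the application file names are illustrative):

FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app.py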

This Dockerfile contains four commands, each of which creates a layer. The FROM statement starts out by creating a layer from the ubuntu:18.04 image. The COPY command adds some files from your Docker client’s current directory. The RUN command builds your application using the make command. Finally, the last layer specifies what command to run within the container.

Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer. Consider, for example, a container based on the Ubuntu 18.04 image.

A storage driver handles the details about the way these layers interact with each other. Different storage drivers are available, which have advantages and disadvantages in different situations.

Container and layers

The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.

Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state. For example, many containers can run from the same Ubuntu 18.04 image at once, each with its own state.

Note: If you need multiple containers to have shared access to the exact same data, store this data in a Docker volume and mount it into your containers.

Docker uses storage drivers to manage the contents of the image layers and the writable container layer. Each storage driver handles the implementation differently, but all drivers use stackable image layers and the copy-on-write (CoW) strategy.

Container size on disk

To view the approximate size of a running container, you can use the docker ps -s command. Two different columns relate to size.

  • size: the amount of data (on disk) that is used for the writable layer of each container.

  • virtual size: the amount of data used for the read-only image data used by the container plus the container’s writable layer size. Multiple containers may share some or all read-only image data. Two containers started from the same image share 100% of the read-only data, while two containers with different images which have layers in common share those common layers. Therefore, you can’t just total the virtual sizes. This over-estimates the total disk usage by a potentially non-trivial amount.

The total disk space used by all of the running containers on disk is some combination of each container’s size and virtual size values. If multiple containers are started from the same exact image, the total size on disk for these containers would be SUM (size of containers) plus one image size (virtual size - size).
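
For example, if five containers are started from the same 250 MB image and each writes 10 MB to its writable layer, docker ps -s reports a virtual size of roughly 260 MB for each container, but the actual disk usage is about 250 MB + (5 × 10 MB) = 300 MB, not 5 × 260 MB.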

This also does not count the following additional ways a container can take up disk space:

  • Disk space used for log files if you use the json-file logging driver. This can be non-trivial if your container generates a large amount of logging data and log rotation is not configured.
  • Volumes and bind mounts used by the container.
  • Disk space used for the container’s configuration files, which are typically small.
  • Memory written to disk (if swapping is enabled).
  • Checkpoints, if you’re using the experimental checkpoint/restore feature.

The copy-on-write (CoW) strategy

Copy-on-write is a strategy of sharing and copying files for maximum efficiency. If a file or directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or running the container), the file is copied into that layer and modified. This minimizes I/O and the size of each of the subsequent layers. These advantages are explained in more depth below.

Sharing promotes smaller images

When you use docker pull to pull down an image from a repository, or when you create a container from an image that does not yet exist locally, each layer is pulled down separately, and stored in Docker’s local storage area, which is usually /var/lib/docker/ on Linux hosts. You can see these layers being pulled in this example:
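
The output looks roughly like this, with one "Pull complete" line per layer (the layer IDs and digest are placeholders here):

$ docker pull ubuntu:18.04
18.04: Pulling from library/ubuntu
<layer-id>: Pull complete
<layer-id>: Pull complete
<layer-id>: Pull complete
<layer-id>: Pull complete
Digest: sha256:<digest>
Status: Downloaded newer image for ubuntu:18.04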

Each of these layers is stored in its own directory inside the Docker host’s local storage area. To examine the layers on the filesystem, list the contents of /var/lib/docker/<storage-driver>. This example uses the overlay2 storage driver:
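
$ sudo ls /var/lib/docker/overlay2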

The directory names do not correspond to the layer IDs (this has been true since Docker 1.10).

Now imagine that you have two different Dockerfiles. You use the first one to create an image called acme/my-base-image:1.0.
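
A sketch of such a base Dockerfile, consistent with the build steps below in which hello.sh is copied into the image:

FROM ubuntu:18.04
COPY . /app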

The second one is based on acme/my-base-image:1.0, but has some additional layers:
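
For example, adding only a CMD instruction on top of the base image:

FROM acme/my-base-image:1.0
CMD /app/hello.sh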

The second image contains all the layers from the first image, plus a new layer with the CMD instruction, and a read-write container layer. Docker already has all the layers from the first image, so it does not need to pull them again. The two images share any layers they have in common.

If you build images from the two Dockerfiles, you can use docker image ls and docker history commands to verify that the cryptographic IDs of the shared layers are the same.

  1. Make a new directory cow-test/ and change into it.

  2. Within cow-test/, create a new file called hello.sh with the following contents:
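
    For example, a simple script along these lines:

    #!/bin/sh
    echo "Hello world"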

    Save the file, and make it executable:
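
    $ chmod +x hello.sh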

  3. Copy the contents of the first Dockerfile above into a new file called Dockerfile.base.

  4. Copy the contents of the second Dockerfile above into a new file calledDockerfile.

  5. Within the cow-test/ directory, build the first image. Don’t forget to include the final . in the command. That sets the build context, which tells Docker where to look for any files that need to be added to the image.
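
    Assuming the image name and file name above, the command might look like this:

    $ docker build -t acme/my-base-image:1.0 -f Dockerfile.base .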

  6. Build the second image.
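
    For example, tagging it with the name used in the verification procedure further below:

    $ docker build -t acme/my-final-image:1.0 .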

  7. Check out the sizes of the images:
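
    $ docker image ls

    Because the two images share all but one layer, their combined disk usage is far smaller than the sum of the sizes reported for each.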

  8. Check out the layers that comprise each image:
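
    $ docker history acme/my-base-image:1.0
    $ docker history acme/my-final-image:1.0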

    Notice that all the layers are identical except the top layer of the second image. All the other layers are shared between the two images, and are only stored once in /var/lib/docker/. The new layer actually doesn’t take any room at all, because it is not changing any files, but only running a command.

    Note: The <missing> lines in the docker history output indicate that those layers were built on another system and are not available locally. This can be ignored.

Copying makes containers efficient

When you start a container, a thin writable container layer is added on top of the other layers. Any changes the container makes to the filesystem are stored here. Any files the container does not change do not get copied to this writable layer. This means that the writable layer is as small as possible.

When an existing file in a container is modified, the storage driver performs a copy-on-write operation. The specific steps involved depend on the specific storage driver. For the aufs, overlay, and overlay2 drivers, the copy-on-write operation follows this rough sequence:

  • Search through the image layers for the file to update. The process starts at the newest layer and works down to the base layer one layer at a time. When results are found, they are added to a cache to speed future operations.

  • Perform a copy_up operation on the first copy of the file that is found, to copy the file to the container’s writable layer.

  • Any modifications are made to this copy of the file, and the container cannot see the read-only copy of the file that exists in the lower layer.

Btrfs, ZFS, and other drivers handle the copy-on-write differently. You can read more about the methods of these drivers later in their detailed descriptions.

Containers that write a lot of data consume more space than containers that do not. This is because most write operations consume new space in the container’s thin writable top layer.

Note: for write-heavy applications, you should not store the data in the container. Instead, use Docker volumes, which are independent of the running container and are designed to be efficient for I/O. In addition, volumes can be shared among containers and do not increase the size of your container’s writable layer.

A copy_up operation can incur a noticeable performance overhead. This overhead is different depending on which storage driver is in use. Large files, lots of layers, and deep directory trees can make the impact more noticeable. This is mitigated by the fact that each copy_up operation only occurs the first time a given file is modified.

To verify the way that copy-on-write works, the following procedure spins up 5 containers based on the acme/my-final-image:1.0 image we built earlier and examines how much room they take up.

Note: This procedure doesn’t work on Docker Desktop for Mac or Docker Desktop for Windows.

  1. From a terminal on your Docker host, run the following docker run commands. Each command prints the ID of the container it starts.
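
    One way to do this is with a short shell loop (the container names are illustrative):

    $ for i in 1 2 3 4 5; do docker run -dit --name my_container_$i acme/my-final-image:1.0 bash; done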

  2. Run the docker ps command to verify the 5 containers are running.
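
    $ docker ps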

  3. List the contents of the local storage area.
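
    Each running container has a directory named after its ID under /var/lib/docker/containers/:

    $ sudo ls /var/lib/docker/containers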

  4. Now check out their sizes:
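
    $ sudo du -sh /var/lib/docker/containers/*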

    Each of these containers only takes up 32k of space on the filesystem.

Not only does copy-on-write save space, but it also reduces start-up time. When you start a container (or multiple containers from the same image), Docker only needs to create the thin writable container layer.

If Docker had to make an entire copy of the underlying image stack each time it started a new container, container start times and disk space used would be significantly increased. This would be similar to the way that virtual machines work, with one or more virtual disks per virtual machine.


Docker enables developers to deploy applications inside containers for testing code in an environment identical to production. IntelliJ IDEA provides Docker support using the Docker plugin. The plugin is bundled and enabled by default in IntelliJ IDEA Ultimate Edition. For IntelliJ IDEA Community Edition, you need to install the Docker plugin as described in Managing plugins.

Enable Docker support

  1. Install and run Docker.

    For more information, see the Docker documentation.

  2. Configure the Docker daemon connection settings.

    • In the Settings/Preferences dialog Ctrl+Alt+S, select Build, Execution, Deployment | Docker.

    • Click the Add button to add a Docker configuration and specify how to connect to the Docker daemon.

      The connection settings depend on your Docker version and operating system. For more information, see Docker configuration.

      The Connection successful message should appear at the bottom of the dialog.

      The Path mappings table is used to map local folders to corresponding directories in the Docker virtual machine's file system. Only specified folders will be available for volume binding.

      This table is not available on Linux, because when running Docker on Linux, any folder is available for volume binding.

  3. Connect to the Docker daemon.

    The configured Docker connection should appear in the Services tool window (View | Tool Windows | Services or Alt+8). Select the Docker node and click the Connect button, or select Connect from the context menu.

    To edit the Docker connection settings, select the Docker node and click the Edit Configuration button on the toolbar, or select Edit Configuration from the context menu.

The Services tool window (View | Tool Windows | Services or Alt+8) enables you to manage images, run containers, and use Docker Compose. As with other tool windows, you can start typing the name of an image or container to highlight the matching items.

Managing images

Docker images are executable packages for running containers. Depending on your development needs, you can use Docker for the following:

  • Pull pre-built images from a Docker registry

    For example, you can pull an image that runs a Postgres server container to test how your application will interact with your production database.

  • Build images locally from a Dockerfile

    For example, you can build an image that runs a container with the Java Runtime Environment (JRE) of some specific version to execute your Java application inside it.

  • Push your images to a Docker registry

    For example, if you want to demonstrate to someone how your application runs in some specific version of the JRE instead of setting up the proper environment, they can run a container from your image.

Images are distributed via the Docker registry. Docker Hub is the default public registry with all of the most common images: various Linux flavors, database management systems, web servers, runtimes, and so on. There are other public and private Docker registries, and you can also deploy your own registry server.

You do not need to configure a registry if you are going to use only Docker Hub.

Configure a Docker registry

  1. In the Settings/Preferences dialog Ctrl+Alt+S, select Build, Execution, Deployment | Docker | Registry.

  2. Click the Add button to add a Docker registry configuration and specify how to connect to the registry. If you specify the credentials, IntelliJ IDEA will automatically check the connection to the registry. The Connection successful message should appear at the bottom of the dialog.

Pull an image from a registry

  1. In the Services tool window, select the Images node and click the Pull Image button.

  2. Select the Docker registry and specify the repository and tag (name and version of the image, for example, postgres:latest).

When you click OK, IntelliJ IDEA runs the docker pull command. For more information, see the docker pull command reference.

Build an image from a Dockerfile

  1. Open the Dockerfile from which you want to build the image.

  2. Click the Run icon in the gutter and select the option to build the image on a specific Docker node.

IntelliJ IDEA runs the docker build command. For more information, see the docker build command reference.

Push an image to a registry

  1. In the Services tool window, select the image that you want to upload and click the Push Image button, or select Push Image from the context menu.

  2. Select the Docker registry and specify the repository and tag (name and version of the image, for example, my-app:v2).

When you click OK, IntelliJ IDEA runs the docker push command. For more information, see the docker push command reference.

Images that you pull or build are stored locally and are listed in the Services tool window under Images. When you select an image, you can view its ID or copy it to the clipboard by clicking the Copy button on the Properties tab.

Images with no tags <none>:<none> can be one of the following:

  • Intermediate images serve as layers for other images and do not take up any space.

  • Dangling images remain when you rebuild an image based on a newer version of another image. Dangling images should be pruned to conserve disk space.

To hide untagged images from the list, open the view options menu on the Docker toolbar, and then click Show Untagged Images to remove the check mark.

To delete one or several images, select them in the list and click the Delete button.

Running containers

Containers are runtime instances of corresponding images. For more information, see the docker run command reference.

IntelliJ IDEA uses run configurations (Run | Edit Configurations) to run Docker containers. There are three types of Docker run configurations:

  • Docker Image: Created automatically when you run a container from an existing image. You can run it from a locally existing Docker image that you either pulled or built previously.

  • Dockerfile: Created automatically when you run a container from a Dockerfile. This configuration builds an image from the Dockerfile, and then derives a container from this image.

  • Docker-compose: Created automatically when you run a multi-container Docker application from a Docker Compose file.

Any Docker run configuration can also be created manually. From the main menu, select Run | Edit Configurations. Then click the Add button, point to Docker, and select the desired type of run configuration.

Run a container from an existing image

  1. In the Services tool window, select an image and click the Create Container button, or select Create Container from the context menu.

  2. In the Create container popup, click Create.

    If you already have a Docker run configuration for this image, the Create container popup will also contain the name of that run configuration as an option.

  3. In the Create Docker Configuration dialog that opens, you can provide a unique name for the configuration and specify a name for the container. If you leave the Container name field empty, Docker will give it a random unique name.

  4. When you are done, click Run to launch the new configuration.

Run a container from a Dockerfile

  1. Open the Dockerfile from which you want to run the container.

  2. Click the Run icon in the gutter and select the option to run the container on a specific Docker node.

This creates and starts a run configuration with default settings, which builds an image based on the Dockerfile and then runs a container based on this image.

To create a run configuration with custom settings, click the Run icon in the gutter and select New Run Configuration. You can specify a custom tag for the built image, as well as a name for the container, and a context folder from which to read the Dockerfile. The context folder can be useful, for example, if you have some artifacts outside of the scope of your Dockerfile, which you would like to add to the file system of the image.

You can right-click the Dockerfile in the Project tool window for the following useful actions:

  • Run the container from the Dockerfile

  • Save the run configuration for the Dockerfile

  • Select the run configuration for this Dockerfile to make it active

Command-line options

When running a container on the command line, the following syntax is used:
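
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]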

All optional parameters can be specified in the corresponding Docker run configuration fields.

To open a run configuration, right-click a container and select Edit Configuration, use the gutter icon menu in the Dockerfile, or select Run | Edit Configurations from the main menu.

Options are specified in the Command Line Options field. For example, you can add options that connect the container to the my-net network and assign it the alias my-app.

Commands and arguments to be executed when starting the container are specified in the Entrypoint and Command fields. These fields override the corresponding ENTRYPOINT and CMD instructions in the Dockerfile.

In the PostgreSQL example, when the container starts, it executes the docker-entrypoint.sh script with postgres as an argument.

Not all docker run options are supported. If you would like to request support for some option, leave a comment in IDEA-181088.

The Command preview field shows the actual Docker command used for this run configuration.

You can also configure the following container settings in the run configuration:

Bind mounts

Docker can mount a file or directory from the host machine to the container using the -v or --volume option. You can configure this in the Docker run configuration using the Bind mounts field.

Make sure that the corresponding path mappings are configured in the Docker connection settings (the Path mappings table).

Click the Bind mounts field and add bindings by specifying the host directory and the corresponding path in the container where it should be mounted. Select Read only if you want to disable writing to the container volume. For example, you can mount a local PostgreSQL data directory (/Users/Shared/pg-data) to the PostgreSQL data directory inside the container (/var/lib/pgsql/data).

If you expand the Command preview field, you will see that the following line was added:

-v /Users/Shared/pg-data:/var/lib/pgsql/data

This can be used in the Command Line Options field instead of creating the list of volume bindings using the Bind Mounts dialog.

View and modify volume bindings for a running container

  1. In the Services tool window, select the container and then select the Volume Bindings tab.

  2. To create a new binding, click the Add button. To edit an existing one, select the binding and click the Edit button.

  3. Specify the settings as necessary and click Save to apply the changes.

The container is stopped and removed, and a new container is created with the specified changes. However, changes are not saved in the corresponding run configuration.

Bind ports

Docker can map specific ports on the host machine to ports in the container using the -p or --publish option. This can be used to make the container externally accessible. In the Docker run configuration, you can choose to expose all container ports to the host or use the Bind ports field to specify port mapping.

Click the Bind ports field and add bindings by specifying which ports on the host should be mapped to which ports in the container. You can also provide a specific host IP from which the port should be accessible (for example, you can set it to 127.0.0.1 to make it accessible only locally, or set it to 0.0.0.0 to open it for all computers in your network).

If you already have PostgreSQL running on the Docker host on port 5432, you can map port 5433 on the host to port 5432 inside the container. This will make the PostgreSQL instance running inside the container accessible via port 5433 on the host.

If you expand the Command preview field, you will see that the following line was added:
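
-p 5433:5432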

This can be used in the Command Line Options field instead of creating the list of port bindings using the Port Bindings dialog.

View and modify port bindings for a running container

  1. In the Services tool window, select the container and then select the Port Bindings tab.

  2. To create a new binding, click the Add button. To edit an existing one, select the binding and click the Edit button. If the Publish all ports checkbox is selected, clear it to be able to specify individual port mappings.

  3. Specify the settings as necessary and click Save to apply changes.

The container is stopped and removed, and a new container is created with the specified changes. However, changes are not saved in the corresponding run configuration.


Environment variables

Environment variables are usually set in the Dockerfile associated with the base image that you are using. There are also environment variables that Docker sets automatically for each new container. You can specify additional variables and redefine the ones that Docker sets using the -e or --env option. In a Docker run configuration, you can use the Environment variables field to configure environment variables.

Click the Environment variables field to add names and values for variables. For example, if you want to connect to PostgreSQL with a specific username by default (instead of the operating system name of the user running the application), you can define the PGUSER variable.

If you expand the Command preview field, you will see that the following line was added:
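
-e PGUSER=my_user

Here my_user is an illustrative value; substitute the user name you want PostgreSQL to use.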

This can be used in the Command Line Options field instead of creating the list of names and values using the Environment Variables dialog. If you need to pass sensitive information (passwords, secrets, and so on) as environment variables, you can use the --env-file option to specify a file with this information.

View and modify environment variables for a running container

  1. In the Services tool window, select the container and then select the Environment variables tab.

  2. To add a new variable, click the Add button. To edit an existing one, select the variable and click the Edit button.

  3. Specify the settings as necessary and click Save to apply changes.

The container is stopped and removed, and a new container is created with the specified changes. However, changes are not saved in the corresponding run configuration.

Build-time arguments

Docker can define build-time values for certain environment variables that do not persist in the intermediate or final images using the --build-arg option for docker build. These must be specified in the ARG instruction of the Dockerfile with a default value. You can configure build-time arguments in the Docker run configuration using the Build args field.

For example, you can use build-time arguments to build the image with a specific version of PostgreSQL. To do this, add the ARG instruction to the beginning of your Dockerfile:
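
ARG PGTAG=latest
FROM postgres:${PGTAG}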

The PGTAG variable in this case will default to latest if you do not redefine it as a build-time argument. So by default, this Dockerfile will produce an image with the latest available PostgreSQL version. However, you can use the Build Args field to redefine the PGTAG variable.

In this example, PGTAG is set to 9, which will instruct Docker to pull postgres:9. When you deploy this run configuration, it will build an image and run the container with PostgreSQL version 9.

To check this, execute postgres --version inside the container and see the output: it should be postgres (PostgreSQL) 9.6.6 or some later version.

If you expand the Command preview field, you will see that the following option was added to the docker build command:

--build-arg PGTAG=9

Interacting with containers

Created containers are listed in the Services tool window. When you select a container, you can view its ID (and the ID of the corresponding image) and copy it to the clipboard using the Copy button on the Properties tab. You can also specify a new name for the container and click Save to start another container with this new name from the same image.

By default, the Services tool window displays all containers, including those that are not running. To hide stopped containers from the list, open the view options menu on the Docker toolbar, and then click Show Stopped Containers to remove the check mark.

If a container was created using a Docker run configuration, to view its deployment log, select it and open the Deploy log tab. To view the log messages from the container's STDOUT and STDERR, select it and open the Log tab. For more information, see the docker logs command reference.

You can browse the files inside a running container using the Files tab. Select any file and open it remotely in the editor, or create a copy of it as a scratch file.

The file browser may not work by default for containers that don't have the full ls package, for example, images that are based on Alpine, Photon, and BusyBox. To use it, you can add the following command in the Dockerfile:

FROM photon:3.0
RUN echo y | tdnf remove toybox

Run commands inside a container

  1. In the Services tool window, right-click the container name and then click Exec.

  2. In the Run command in container popup, click Create.

  3. In the Exec dialog, type the command and click OK. For example:

    • ls /tmp: list the contents of the /tmp directory

    • mkdir /tmp/my-new-dir: create the my-new-dir directory inside the /tmp directory

    • /bin/bash: start a bash session

For more information, see the docker exec command reference.

View detailed information about a running container

  • In the Services tool window, right-click the container name and then click Inspect.

    The output is rendered as a JSON array on the Inspection tab.

For more information, see the docker inspect command reference.

  • In the Services tool window, right-click the container name and then click Show processes.

    The output is rendered as a JSON array on the Processes tab.

For more information, see the docker top command reference.

Attach a console to the output of an executable container

  • In the Services tool window, right-click the container and then click Attach.

    The console is attached to the output of the ENTRYPOINT process running inside a container, and is rendered on the Attached console tab.

For more information, see the docker attach command reference.

Docker Compose

Docker Compose is used to run multi-container applications. For example, you can run a web server, backend database, and your application code as separate services. Each service can be scaled by adding more containers if necessary. This enables you to perform efficient development and testing in a dynamic environment, similar to production.
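
For example, a minimal docker-compose.yml along these lines defines a database service and an application service (the image, build context, ports, and password are illustrative):

version: "3"
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db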

Run a multi-container Docker application

  1. Define necessary services in one or several Docker Compose files.

  2. From the main menu, select Run | Edit Configurations.

  3. Click the Add button, point to Docker, and then click Docker-compose.

  4. Specify the Docker Compose files that define services which you want to run in containers. If necessary, you can restrict the services that this configuration will start, specify environment variables, and force building of images before starting corresponding containers (that is, add the --build option for the docker-compose up command).

  5. When the run configuration is ready, execute it.

To quickly create a Docker-compose run configuration and run it with default settings, right-click a Docker Compose file in the Project tool window and click Run in the context menu. You can also use gutter icons and the context menu in the Docker Compose file to control services.

When Docker Compose runs your multi-container application, you can use the Services tool window to control specific services and interact with containers. The containers are listed under the dedicated Compose nodes, not under the Containers node (which is only for standalone containers).

Scale a service

  1. In the Services tool window, select the service you want to scale and click the Scale button, or select Scale from the context menu.

  2. Specify how many containers you want for this service and click OK.

Stop or remove a running application

  • To stop a single service, select it in the Services tool window and click the Stop button, or select Stop from the context menu.

  • To stop all services of the application, select the Compose node in the Services tool window and click the Stop button.

  • To stop and remove the whole application, select the Compose node in the Services tool window and click the Down button.

The Down action stops and removes containers along with all related networks, volumes, and images.

Open the Docker Compose file that was used to run the application

  • In the Services tool window, right-click the Compose node or a nested service node and then click Jump to Source in the context menu (F4).

The Docker-compose run configuration will identify environment files with the .env suffix if they are located in the same directory as the Docker Compose file.

Docker debug

To debug your application running in a Docker container, you can use the remote debug configuration:

  1. In the main menu, select Run | Edit Configurations.

  2. Click the Add button and select Remote.

  3. In the Before launch section, click the Add button and select Launch Docker before debug.

  4. Specify the Docker configuration you want to run and configure the preferred container port for the debugger to attach to if the default one is allocated to something else.

  5. Check the Custom Command and modify it if necessary.

  6. Apply all changes, remove any running containers of the application you want to debug, and then launch the remote debug configuration.


Troubleshooting

If you encounter one of the following problems, try the corresponding suggested solution.

If IntelliJ IDEA cannot connect to the Docker daemon, make sure that:

  • Docker is running.

  • Your Docker connection settings are correct.

If you are using Docker for Windows, enable the Expose daemon on tcp://localhost:2375 without TLS option in the General section of your Docker settings.

If you are using Docker Toolbox, make sure that Docker Machine is running and its executable is specified correctly in the Settings/Preferences dialog Ctrl+Alt+S under Build, Execution, Deployment | Docker | Tools.

If Docker Compose applications fail to run, make sure that the Docker Compose executable is specified correctly in the Settings/Preferences dialog Ctrl+Alt+S under Build, Execution, Deployment | Docker | Tools.

If your containerized application is not accessible from the host, make sure that the corresponding container ports are exposed. Use the EXPOSE instruction in your Dockerfile.

Unable to associate existing Dockerfiles or Docker Compose files with relevant types

When you create new Dockerfiles or Docker compose files, IntelliJ IDEA automatically identifies their type. If a file type is not evident from its name, you will be prompted to select the file type manually. To associate an existing file with the correct type, right-click it in the Project view and select Associate with File Type from the context menu.

If the Associate with File Type action is disabled, this probably means that the filename is registered as a pattern for the current file type. For example, if you have a Dockerfile with a custom name that is recognized as a text file, you cannot associate it with the Dockerfile type. To remove the file type pattern, do the following:

  1. In the Settings/Preferences dialog Ctrl+Alt+S, select Editor | File Types.

  2. Select the relevant file type (in this case: Text) and remove the pattern with the name of the file.

  3. Click OK to apply the changes.

Now you should be able to set the correct file type using Associate with File Type in the context menu.

Limitations

The Docker integration plugin has certain limitations and bugs; however, JetBrains is constantly working on fixes and improvements for it. You can find the list of Docker issues in our bug tracking system and vote for the ones that affect you the most. You can also file your own bugs and feature requests.