25 Most Useful Ubuntu Server Commands

 

Ubuntu Server is a popular Linux distribution that is used by businesses and individuals alike. It is a powerful and versatile operating system that can be used for a variety of tasks, from hosting websites to running databases.

One of the strengths of Ubuntu Server is its command-line interface (CLI). The CLI is a powerful tool that can be used to perform a wide range of tasks, from basic file management to complex system administration.

In this blog post, we will discuss the top 25 most useful Ubuntu Server commands. These commands will cover a variety of tasks, so you should be able to find what you need, regardless of your experience level.

The 25 Most Useful Ubuntu Server Commands

  1. ls: List the contents of a directory.

    • Example: ls
  2. cd: Change the current directory.

    • Example: cd /etc
  3. mkdir: Create a new directory.

    • Example: mkdir my_directory
  4. touch: Create an empty file.

    • Example: touch my_file
  5. cp: Copy a file or directory.

    • Example: cp my_file /tmp
  6. mv: Move or rename a file or directory.

    • Example: mv my_file /home/user
  7. rm: Remove a file or directory (use -r to remove a directory and its contents).

    • Example: rm my_file
  8. nano: Edit a text file.

    • Example: nano my_file
  9. cat: Read the contents of a file.

    • Example: cat my_file
  10. grep: Search for a text pattern in a file.

    • Example: grep "hello" my_file
  11. chmod: Change the permissions of a file or directory.

    • Example: chmod 400 my_file
  12. chown: Change the ownership of a file or directory.

    • Example: chown user my_file
  13. tar: Create or extract a tar archive (use -cf to create, -xf to extract).

    • Example: tar -cf my_archive.tar my_directory
  14. zip: Create a zip file.

    • Example: zip my_archive.zip my_directory
  15. unzip: Extract a zip file.

    • Example: unzip my_archive.zip
  16. ssh: Connect to a remote server.

    • Example: ssh user@server
  17. sudo: Run a command with root privileges.

    • Example: sudo apt install my_package
  18. apt: Install or remove packages from the Ubuntu repositories.

    • Example: apt install my_package
  19. apt-cache: Search for packages in the Ubuntu repositories.

    • Example: apt-cache search my_package
  20. dpkg: Manage packages that are installed on the system.

    • Example: dpkg -l
  21. ps: List running processes (plain ps shows only your current session; use ps aux for all processes).

    • Example: ps aux
  22. top: Monitor the system resources.

    • Example: top
  23. kill: Terminate a running process by its process ID (PID).

    • Example: kill 1234
  24. df: Display disk space usage (add -h for human-readable sizes).

    • Example: df -h
  25. du: Display the disk usage of files and directories.

    • Example: du -h /
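Many of these commands are most useful in combination. Here is a short illustrative session tying several of them together, using the same example names from the list above (my_directory, my_file, my_archive.tar):

```shell
# Work in a scratch directory so nothing else on the system is touched
mkdir -p my_directory
cd my_directory

# Create a file, write some text into it, and search it
touch my_file
echo "hello" > my_file
grep "hello" my_file          # prints: hello

# Copy the file, archive both copies, and list the results
cp my_file my_file.bak
tar -cf my_archive.tar my_file my_file.bak
ls                            # lists my_archive.tar, my_file, my_file.bak

# Lock the original down to owner read-only
chmod 400 my_file
```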

YouTube's Experiment with Ad Blockers: Urging Viewers to Choose Ads or Premium

 
YouTube is implementing a significant change to video playback for users who utilize ad blockers. In a statement to The Verge, the company confirmed that it is conducting a global experiment that urges viewers with ad blockers enabled to either permit ads on YouTube or consider subscribing to YouTube Premium.

This development follows the emergence of a new prompt that warns users about potential disruptions in video playback if YouTube detects repeated usage of ad blocking tools. Android Authority previously reported on these tests, which restrict viewers from watching more than three videos when an ad blocker is active.

According to Oluwa Falodun, a Google spokesperson, ad blocker detection is not a new practice, as other publishers frequently request viewers to disable ad blockers. YouTube emphasizes that it will issue multiple notifications encouraging users to discontinue the use of such tools or, alternatively, subscribe to YouTube Premium, before any interruptions to their viewing experience occur.

“We take disabling playback very seriously and will only do so if viewers disregard repeated requests to allow ads on YouTube," Falodun stated in an email to The Verge. "If viewers believe they have been incorrectly identified as using an ad blocker, they can provide feedback by clicking on the link in the prompt."

These measures indicate that YouTube is adopting a stricter stance against ad blockers, citing the importance of ad revenue for compensating creators and maintaining a free platform. "YouTube’s ad-supported model supports a diverse ecosystem of creators and provides billions of people worldwide with access to free content through ads," the company's statement explained.

Over the years, YouTube has tested users' patience by experimenting with heavier ad loads. In one of its previous experiments, the company served up to ten unskippable clips within a single ad break. Additionally, in May, YouTube announced the introduction of 30-second ads on TV platforms.

YouTube Premium, priced at $11.99 per month or $119.99 annually, offers an ad-free experience along with other benefits like offline downloads and YouTube Music Premium. Last November, the company announced that it had surpassed 80 million combined subscribers across YouTube Premium and YouTube Music. Thus, while protecting creators' earnings serves as a commendable rationale, YouTube also has a vested interest in steering more users towards its recurring monthly subscription.

"We aim to inform viewers that ad blockers violate YouTube's Terms of Service and make it easier for them to enable ads on YouTube or try YouTube Premium for an ad-free experience," the company stated in its email to The Verge.

Intel Advances Quantum Computing with New Test Chip

 
Intel has announced a new quantum test chip that marks a significant milestone in the company's efforts to build a commercial quantum computer. The new chip, called Tunnel Falls, has 12 qubits, which are the basic units of quantum information. This is the most qubits ever integrated on a single chip by Intel, and it represents a major step forward in the company's efforts to scale up its quantum computing capabilities.

The Tunnel Falls chip is based on Intel's silicon spin qubit technology, which uses the spin of electrons to store quantum information. This technology is well-suited for large-scale quantum computers because it is relatively easy to manufacture and it can be integrated with existing semiconductor manufacturing processes.

In addition to the 12 qubits, the Tunnel Falls chip also includes a number of other features that are important for building a commercial quantum computer. These features include a cryogenic control chip that helps to keep the qubits at a very low temperature, and a readout circuit that allows the qubits to be measured.

Intel is making the Tunnel Falls chip available to the quantum research community, which will help to accelerate the development of quantum computing applications. The company is also working on developing a full-stack quantum computing system, which would include the quantum chip, a classical computer to control the chip, and software to run quantum algorithms.

Intel's progress in quantum computing is significant, and it is one of several companies that are working to develop a commercial quantum computer. If successful, quantum computers could revolutionize a wide range of industries, including finance, healthcare, and materials science.

Here are some of the potential applications of quantum computers:

  • Finance: Quantum computers could be used to develop new financial products and services, and to improve the efficiency of existing financial markets.
  • Healthcare: Quantum computers could be used to develop new drugs and treatments, and to improve the accuracy of medical diagnostics.
  • Materials science: Quantum computers could be used to design new materials with improved properties, such as strength, conductivity, and lightness.
The development of quantum computers is still in its early stages, but the potential benefits are enormous. Intel's latest test chip is a significant step forward, and it shows that the company is committed to making quantum computing a reality.

Enhancing Network Security: Dedicated Firewall vs. Built-in Computer Firewall

 


Network security is a critical aspect of protecting your data and ensuring the integrity of your network. While using the built-in firewall on your computer is a step in the right direction, it may not provide the level of protection and control needed for a comprehensive network security strategy. In this blog post, we will explore the advantages of using a dedicated firewall device, such as pfSense, over relying solely on the built-in firewall on your computer.

Network-wide Protection:
A dedicated firewall device operates at the network level, safeguarding all devices within the network. By implementing a dedicated firewall, such as pfSense, you gain centralized control and monitoring capabilities, enabling consistent security policies across your entire network. On the other hand, the built-in firewall on a computer protects only that specific device, leaving other devices vulnerable.

Advanced Features and Customization:
Dedicated firewall devices offer a wide range of advanced features and customization options beyond what a built-in firewall provides. For example, pfSense supports intrusion detection/prevention systems (IDS/IPS), VPN capabilities, content filtering, and more. These features allow for granular control over network traffic, empowering you to tailor security measures to meet your specific requirements.

Segmentation and Network Isolation:
One of the key advantages of a dedicated firewall device is the ability to create network segments or VLANs (Virtual Local Area Networks), which facilitate network isolation. By separating different parts of your network, you can enhance security by restricting access between segments and contain potential threats or compromises. Built-in firewalls lack this segmentation capability.

Performance and Scalability:
Dedicated firewall devices are designed to handle heavy network traffic efficiently. Unlike built-in firewalls, which may have performance limitations, devices like pfSense are equipped with robust hardware optimized for processing network traffic at higher data throughput. They also offer scalability options to accommodate growing networks and increased traffic demands.

Centralized Management and Logging:
Dedicated firewall devices provide centralized management interfaces and logging capabilities, simplifying administration and improving visibility into network traffic and security events. With pfSense, for instance, you can configure and monitor firewall settings across your entire network from a single interface, ensuring consistent security policies and easing management efforts.

Conclusion:
While utilizing the built-in firewall on your computer provides a basic level of protection, a dedicated firewall device like pfSense offers significant advantages for network security. With features such as network-wide protection, advanced customization options, network segmentation, improved performance, scalability, and centralized management and logging, dedicated firewalls ensure comprehensive network security and enhanced control over network traffic.

To truly fortify your network and protect your valuable data, investing in a dedicated firewall device like pfSense is a wise choice. By implementing such a solution, you can proactively safeguard your network against threats, enforce security policies consistently, and gain peace of mind knowing that your network is fortified against potential vulnerabilities.

Remember, network security is a complex topic, and it's always recommended to consult with professionals or network security experts to ensure you have the most appropriate and effective security measures in place.

Understanding RAID Configurations: Enhancing Data Storage and Performance

 
Efficient data storage and reliable access are essential for businesses and individuals alike. RAID (Redundant Array of Independent Disks) configurations provide a solution by combining multiple physical hard drives into a logical unit with various levels of data redundancy, performance, or both. In this post, we will explore the most commonly used RAID configurations, their benefits, and how they can improve your data storage infrastructure.

RAID 0 (Striping): RAID 0, known as striping, is a configuration that focuses on performance and storage capacity. It distributes data across multiple drives, allowing for parallel read and write operations. While RAID 0 enhances performance by leveraging multiple drives, it lacks redundancy. A single drive failure can result in data loss, making it crucial to maintain regular backups.

RAID 1 (Mirroring): RAID 1, or mirroring, prioritizes data redundancy and fault tolerance. It duplicates data across two or more drives, ensuring that if one drive fails, the mirrored drive(s) can seamlessly take over. RAID 1 provides enhanced data integrity and improved read performance, but at the cost of reduced storage capacity due to data duplication.

RAID 5 (Striping with Parity): RAID 5 combines striping and distributed parity to achieve a balance between performance, capacity, and fault tolerance. It stripes data and calculates parity information, distributing it across multiple drives. RAID 5 can tolerate the failure of a single drive without data loss, as the parity information enables reconstruction of the missing data.

RAID 6 (Striping with Double Parity): Similar to RAID 5, RAID 6 also utilizes striping and parity, but with double parity for higher fault tolerance. It can withstand the failure of two drives simultaneously without data loss. While RAID 6 offers superior data protection, it has reduced write performance due to the overhead of calculating double parity.

RAID 10 (RAID 1+0): RAID 10 combines mirroring (RAID 1) and striping (RAID 0) to provide both performance and fault tolerance. Data is striped across mirrored pairs of drives, offering improved read and write performance, along with the ability to tolerate multiple drive failures depending on the configuration.

RAID 50 (RAID 5+0): RAID 50 combines striping and distributed parity, similar to RAID 5, but across multiple RAID 5 arrays. By striping data across multiple RAID 5 sets, RAID 50 enhances performance and fault tolerance, making it suitable for applications that demand both high performance and data protection.

RAID 60 (RAID 6+0): RAID 60 combines striping with double parity, similar to RAID 6, but across multiple RAID 6 arrays. It offers increased performance and higher fault tolerance compared to RAID 50, making it an ideal choice for systems that require robust data protection and optimal performance.

Understanding RAID configurations is vital for optimizing data storage and ensuring reliable access to critical information. By leveraging RAID, organizations and individuals can achieve a balance between performance, storage capacity, and data redundancy. Whether you prioritize speed, fault tolerance, or a combination of both, there is a RAID configuration that suits your specific needs. As you design your data storage infrastructure, consider the advantages and trade-offs offered by each RAID configuration to maximize the reliability and efficiency of your system.
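The capacity trade-offs described above are easy to quantify. The sketch below (illustrative Python; the drive counts and sizes are hypothetical, and the model ignores filesystem and controller overhead) computes the usable capacity for the common single-array levels:

```python
def usable_capacity(level, drives, size_tb):
    """Usable capacity in TB for common RAID levels (simplified model)."""
    if level == 0:      # striping: every drive contributes, no redundancy
        return drives * size_tb
    if level == 1:      # mirroring: only one drive's worth of unique space
        return size_tb
    if level == 5:      # one drive's worth of capacity consumed by parity
        return (drives - 1) * size_tb
    if level == 6:      # two drives' worth of capacity consumed by parity
        return (drives - 2) * size_tb
    if level == 10:     # striped mirrors: half the total capacity
        return (drives // 2) * size_tb
    raise ValueError(f"unsupported RAID level: {level}")

# Example: an array of four 4 TB drives
for level in (0, 1, 5, 6, 10):
    print(f"RAID {level}: {usable_capacity(level, 4, 4)} TB usable")
```

Running this for four 4 TB drives shows RAID 0 yielding 16 TB, RAID 1 only 4 TB, RAID 5 12 TB, and RAID 6 and RAID 10 8 TB each, which mirrors the redundancy-versus-capacity trade-offs discussed above.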

 

The Hidden Potential of Old PCs: Transforming Them into Servers and More


In today's fast-paced world, technology is evolving rapidly, leaving behind a trail of outdated devices. Among these relics are old PCs, once considered cutting-edge, now relegated to the corner of our attics or gathering dust in storage. However, what many fail to realize is that these seemingly obsolete machines possess untapped potential. In this blog post, we'll explore the usefulness of repurposing old PCs, particularly as servers, and shed light on the various benefits they can offer.

  1. Affordable Server Solutions:
    Servers are essential for hosting websites, running applications, and managing data. However, dedicated server hardware can be expensive. Repurposing an old PC as a server provides a cost-effective alternative. With minimal investment, you can convert your outdated PC into a functional server capable of handling tasks like file storage, media streaming, or even hosting a personal website.

  2. Learning and Experimentation:
    For technology enthusiasts, repurposing an old PC as a server opens up a world of learning opportunities. It allows you to delve into the realms of networking, server administration, and software configuration. By repurposing an old PC, you can experiment with different operating systems, set up virtual machines, or explore advanced server applications. This hands-on experience can be immensely valuable for students, aspiring IT professionals, or hobbyists looking to expand their knowledge.

  3. Home Media Server:
    One of the most popular uses for an old PC is transforming it into a home media server. By installing media server software, such as Plex or Emby, you can centralize your multimedia collection and stream it to various devices within your home network. Whether it's movies, music, or photos, an old PC can act as a hub for all your entertainment needs, turning your living room into a personalized media center.

  4. Network-Attached Storage (NAS):
    Another practical application for repurposed PCs is creating a Network-Attached Storage (NAS) solution. By adding additional hard drives and configuring the PC with NAS software like FreeNAS or OpenMediaVault, you can transform it into a storage powerhouse. This setup allows you to store and access files from anywhere on your network, providing a convenient backup solution and file-sharing capabilities.

  5. Dedicated Game Server:
    For gamers, repurposing an old PC as a dedicated game server can enhance their multiplayer gaming experience. Many games support player-hosted servers, enabling you to create a custom gaming environment for you and your friends. Whether it's Minecraft, Counter-Strike, or a myriad of other games, an old PC can serve as the backbone for hosting these servers, ensuring low latency and personalized gameplay.

  6. Sustainable Computing: In an era when environmental concerns are paramount, repurposing old PCs contributes to sustainable computing practices. By extending the lifespan of these devices, we reduce electronic waste and minimize the strain on natural resources required to manufacture new hardware. Repurposing not only benefits our pockets but also promotes ecological responsibility by reducing the carbon footprint associated with the disposal and production of electronics.

The usefulness of old PCs extends far beyond their intended lifespan. By repurposing them as servers, media centers, or dedicated game servers, we unlock their hidden potential and breathe new life into these once-powerful machines. The affordability, learning opportunities, and practical applications offered by repurposed PCs make them invaluable assets in our ever-evolving technological landscape. So, before you discard that outdated PC, consider the possibilities that lie within and embrace the potential of repurposing.

Internet vs. Intranet: Understanding the Difference

 


In the digital age, connectivity plays a vital role in our daily lives. We rely on computer networks to communicate, access information, and collaborate with others. Two commonly used networks are the internet and intranet, each serving distinct purposes. In this blog post, we will explore the key differences between the internet and intranet, shedding light on their unique functionalities and the advantages they offer.

Internet: Connecting the World
The internet is a vast global network that connects millions of devices, networks, and users worldwide. It is a public network, accessible to anyone with an internet connection. With its ubiquitous presence, the internet enables us to access a wealth of information, connect with people across the globe, and engage in various online activities. From browsing websites and social media platforms to sending emails and streaming media, the internet has revolutionized the way we live, work, and communicate.

Intranet: Secure Collaboration within Organizations
While the internet opens up a world of possibilities, organizations often require a private and secure network for their internal operations. Enter the intranet, a private network limited to a specific organization or a defined group of users. Unlike the internet, access to an intranet is restricted to authorized personnel within the organization. It serves as a secure platform for internal communication, collaboration, and information sharing.

Enhancing Internal Communication and Collaboration
Intranets are designed to facilitate seamless internal communication within an organization. They offer features such as company-wide news updates, discussion forums, and messaging systems, enabling employees to stay connected and informed. In addition, intranets provide a centralized platform for sharing documents, files, and resources, fostering collaboration and teamwork. From project management tools to employee directories, intranets streamline internal processes and improve overall efficiency.

Securing Sensitive Information
One of the primary advantages of an intranet is the enhanced security it provides. By keeping the network restricted to authorized users, organizations can safeguard sensitive data and protect confidential information. Intranets employ various security measures such as user authentication, data encryption, and access controls to ensure that only approved individuals can access and interact with the network. This level of control and security is crucial for organizations dealing with proprietary information, customer data, and intellectual property.

Tailored for Organizational Needs
Unlike the internet, which is a standardized and globally accessible network, intranets can be customized to meet specific organizational requirements. Organizations can design their intranets to reflect their brand identity, integrate with existing systems and tools, and tailor functionalities to their unique workflows. This flexibility allows organizations to create an intranet environment that aligns with their specific needs, enhancing productivity and collaboration within the company.

The internet and intranet are distinct networks, each serving a specific purpose in the digital landscape. The internet connects the world, offering access to vast amounts of information and enabling global communication. On the other hand, intranets provide organizations with a secure and private network for internal communication, collaboration, and information sharing. By understanding the differences between these two networks, organizations can leverage the strengths of both to maximize their productivity, security, and efficiency in today's interconnected world.

 

How To Set Up Your Own VPN Service FOR FREE with OpenVPN and Docker

 


In an increasingly connected world, privacy and security have become paramount. Setting up your own Virtual Private Network (VPN) service can provide an added layer of protection for your online activities. In this step-by-step guide, we will walk you through the process of creating your own VPN service using OpenVPN and Docker. By following these instructions, you'll have full control over your VPN, ensuring enhanced privacy and security.

Before getting started, ensure that you have the following:

  • A server or virtual machine with Docker installed
  • Basic command-line knowledge
  • A public IP address or domain name for your server 

**IMPORTANT** You will need to forward UDP port 1194 on your router to your server.

Step 1: Install Docker
Begin by installing Docker on your server. For a headless server, install Docker Engine by following the instructions for your operating system on the official Docker website (https://docs.docker.com/engine/install/).

Step 2: Create a Docker Network
Open a terminal or command prompt and create a Docker network that will be used for the VPN connections. Enter the following command:
docker network create vpn-net

Step 3: Create an OpenVPN Configuration Directory
Create a directory on your server where you will store the OpenVPN configuration files. For example, let's create a directory called "openvpn-config":
mkdir openvpn-config

Step 4: Generate OpenVPN Server Configuration Files
Use the following command to generate the OpenVPN server configuration files:
docker run -v $PWD/openvpn-config:/etc/openvpn --log-driver=none --rm -it kylemanna/openvpn ovpn_genconfig -u udp://YOUR_SERVER_IP

Replace 'YOUR_SERVER_IP' with the public IP address or domain name of your server.

Step 5: Initialize the OpenVPN Certificate Authority (CA)
Run the command below to initialize the OpenVPN certificate authority:
docker run -v $PWD/openvpn-config:/etc/openvpn --log-driver=none --rm -it kylemanna/openvpn ovpn_initpki

You will be prompted to enter a passphrase for the CA key. Choose a strong passphrase and remember it securely.

Step 6: Start the OpenVPN Server Container
To start the OpenVPN server container, execute the following command:
docker run -v $PWD/openvpn-config:/etc/openvpn -d -p 1194:1194/udp --cap-add=NET_ADMIN --restart=always --name=openvpn-server --net=vpn-net kylemanna/openvpn
We will now need to open UDP port 1194 on our server's firewall:
ufw allow 1194/udp

Step 7: Generate Client Configuration Files
Generate client configuration files for each VPN client by executing the following commands:
docker run -v $PWD/openvpn-config:/etc/openvpn --log-driver=none --rm -it kylemanna/openvpn easyrsa build-client-full CLIENT_NAME nopass

docker run -v $PWD/openvpn-config:/etc/openvpn --log-driver=none --rm kylemanna/openvpn ovpn_getclient CLIENT_NAME > CLIENT_NAME.ovpn

Replace 'CLIENT_NAME' with a unique identifier for each client.

Step 8: Transfer Client Configuration Files
Securely transfer the generated client configuration files (CLIENT_NAME.ovpn) to the respective client devices using methods like secure file transfer (SCP) or encrypted email.

Step 9: Connect to the VPN
Install an OpenVPN client application, such as OpenVPN GUI for Windows or Tunnelblick for macOS, on the client device. Import the client configuration file (CLIENT_NAME.ovpn) into the client application and connect to the VPN.

By following these simple steps, you have successfully created your own VPN service using OpenVPN and Docker.
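For reference, the CLIENT_NAME.ovpn file generated in Step 7 is a plain-text OpenVPN client profile. An abridged, illustrative example is shown below; the exact contents produced by the kylemanna/openvpn image may differ, and the certificate blocks are placeholders:

```
client
dev tun
proto udp
remote YOUR_SERVER_IP 1194
nobind
remote-cert-tls server
<ca>
(CA certificate)
</ca>
<cert>
(client certificate)
</cert>
<key>
(client private key)
</key>
```

Because the profile embeds the client's private key, treat the .ovpn file as a secret and transfer it only over secure channels, as noted in Step 8.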

 


How To Easily Make Your Very Own QR Code Generator With Python

 


Here's a step-by-step tutorial on how to easily make a QR code generator using Python!

Step 1: Install Required Libraries
First, make sure you have the necessary libraries installed. pyqrcode and pypng can be installed with pip. tkinter ships with Python itself, but on Debian/Ubuntu systems it may need to be installed as a system package:

$ sudo pip install pyqrcode
$ sudo pip install pypng
$ sudo apt install python3-tk

Step 2: Open up your text editor (I like Nano) and begin coding
Start by importing the required libraries in your Python script:

import pyqrcode
import tkinter as tk
from tkinter import filedialog


Step 3: Define the QR Code Generation Function 
Next, define the generate_qr() function. This function will be called when the "Generate" button is clicked. Inside this function, we'll generate the QR code based on the user input and save it as a PNG file:

def generate_qr():
    qr = pyqrcode.create(entry.get())
    filename = filedialog.asksaveasfilename(defaultextension='.png')
    qr.png(filename, scale=8)
    window.destroy()

In this function, we first create a QR code using pyqrcode.create(). The data for the QR code is obtained from the entry widget using entry.get(). Next, we open a file dialog using filedialog.asksaveasfilename() to let the user choose the location and name of the PNG file to save the QR code. Finally, we save the QR code as a PNG file using qr.png() with a specified scale of 8, and then close the GUI window using window.destroy().

Step 4: Create the GUI Window 
Create the main GUI window using the tkinter library:

window = tk.Tk()
window.title('QR Code Generator')

We set the window's title to "QR Code Generator".

Step 5: Add GUI Widgets 
Add the necessary GUI widgets to the window, including a label, an entry field, and a button:

label = tk.Label(window, text='Enter data:')
label.pack()
entry = tk.Entry(window)
entry.pack()
button = tk.Button(window, text='Generate', command=generate_qr)
button.pack()

We create a label widget to display the text "Enter data:" and pack it into the window. Then, we create an entry widget to allow the user to enter the data for the QR code and pack it as well. Finally, we create a button widget with the label "Generate" and associate the generate_qr() function with the button's command parameter.

Step 6: Run the GUI Loop 
Run the main GUI loop using window.mainloop():

window.mainloop()

This line of code will start the GUI event loop and keep the window displayed until it is closed.

Step 7: Run the Code 
Save your script with a .py extension (e.g., qr_code_generator.py) and run it using Python. The GUI window will appear, allowing you to enter the data for the QR code. After entering the data, click the "Generate" button. A file dialog will open where you can choose the location and name of the PNG file to save the QR code. Once you select the location and provide a file name, the QR code will be generated and saved as a PNG file. The GUI window will then close.

That's it! You've successfully created a QR code generator using Python and tkinter. Below is the code in its entirety. Enjoy!

import pyqrcode
import tkinter as tk
from tkinter import filedialog

def generate_qr():
    qr = pyqrcode.create(entry.get())
    filename = filedialog.asksaveasfilename(defaultextension='.png')
    qr.png(filename, scale=8)
    window.destroy()

window = tk.Tk()
window.title('QR Code Generator')

label = tk.Label(window, text='Enter data:')
label.pack()

entry = tk.Entry(window)
entry.pack()

button = tk.Button(window, text='Generate', command=generate_qr)
button.pack()

window.mainloop()

Step-by-Step Guide: How To Install Nextcloud Using Docker in Linux


Nextcloud is a popular open-source file-sharing platform that can be self-hosted on your own server. Docker is a containerization technology that makes it easy to deploy and manage Nextcloud in a container. In this tutorial, we'll walk through the process of installing Nextcloud using the Apache Docker image, opening the correct UFW ports, and configuring the trusted domains in the config.php file for use with a public IP.

Prerequisites

Before we begin, ensure that you have the following:
    
    - A Debian or Ubuntu based distribution of Linux

    - Docker installed

    - UFW (Uncomplicated Firewall) installed and enabled
 

Install Nextcloud Using Docker

To install Nextcloud using the Apache Docker image, we'll use the official Nextcloud Docker image available on Docker Hub. (Note: Docker commands must be run as root or with sudo.) Here are the steps:
 
    1. Pull the official Nextcloud image from Docker Hub:
        $ docker pull nextcloud 
 
    2. Start the container with the following command:
        $ docker run -d -p 8080:80 nextcloud
This command starts the Nextcloud container and maps port 8080 on the host to port 80 in the container. You should now be able to access Nextcloud at http://localhost:8080/ from your host system.
 
    3. Verify that the container is running:
        $ docker ps

You have now successfully installed Nextcloud using the Apache Docker image. Note your container ID and name in the output, as I will be referring to it as <container-ID> going forward. You will still need to open the following ports to be able to access Nextcloud from devices other than your host system, or to access it from your host system using your public IP address.
 

Opening UFW Ports

To allow incoming traffic to your Nextcloud instance, we need to open the correct ports in UFW. Here are the steps:
 
    1. Allow incoming HTTP and HTTPS traffic:
        $ ufw allow 8080/tcp
        $ ufw allow 80/tcp
        $ ufw allow 443/tcp

    2. Reload UFW:
        $ ufw reload
 

Opening Bash in the Docker Container

To open a Bash shell inside the Docker container, use the following command:
        $ docker exec -it <container-id> bash

Updating the Container

Before you can edit the config.php file, you need to update the package lists inside the container so that you can install nano (a text editor). Use the following command:
        $ apt-get update && apt-get install -y nano

Editing the config.php File

To configure Nextcloud to use a public IP address, we need to add the public IP address to the config.php file. (Note: the config file won't be available until after you open Docker for the first time and setup your Admin account.) Here are the steps:
        $ cd /var/www/html/config/
        $ nano config.php

Add the public IP address to the trusted_domains array:

'trusted_domains' =>
array (
    0 => 'localhost:8080',
    1 => 'your_public_ip_address',
),

Note: Be sure to replace your_public_ip_address with your own public IP address.

Save and exit the file.

Congratulations! You have now successfully configured Nextcloud to use a public IP address. You can access your Nextcloud instance by navigating to http://your_public_ip_address:8080 in your web browser.
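The docker commands above can also be captured in a Compose file, which makes the deployment easier to reproduce and restart. A minimal sketch is shown below; it assumes the same official nextcloud image and port mapping used earlier, and the volume name nextcloud_data is a hypothetical choice:

```yaml
version: "3"

services:
  nextcloud:
    image: nextcloud          # same official Apache-based image used above
    restart: always
    ports:
      - "8080:80"             # same host:container mapping as the docker run command
    volumes:
      - nextcloud_data:/var/www/html   # named volume so data survives container rebuilds

volumes:
  nextcloud_data:
```

Save it as docker-compose.yml and start the stack with docker compose up -d from the same directory.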