Most commonly used Linux Commands and their descriptions

In this post, I will show you the most commonly used Linux commands and how to use them. At the end of the blog post, I will give you a script that creates an environment to practice these commands.

If you’re new to Linux, it can be overwhelming to navigate through all the commands and tools available. However, mastering the basics can go a long way in making your Linux experience smooth and efficient. In this blog post, we’ll go through some of the essential Linux commands and their usage.

Before we get started, it’s worth noting that Linux is heavily oriented around the command line. While graphical desktops do exist for Linux, most day-to-day administration and development work happens through typed commands. As a result, it can seem daunting for those who are used to graphical interfaces such as Windows or macOS, but it’s nothing to worry about. In fact, many developers prefer the command-line interface because of its speed and versatility.

Here are some of the basic Linux commands that every Linux user should know:

Command – Description
pwd – Print working directory
ls – List directory contents
cd – Change directory
mkdir – Make directory
rm – Remove files or directories
cp – Copy files and directories
mv – Move or rename files and directories
nano – Simple text editor
cat – Print file contents
grep – Search file contents
chmod – Change file or directory permissions

pwd – Print working directory

The pwd command is used to print the current working directory. It’s a handy command when you’re unsure of the directory you’re currently in. To use it, simply type pwd and hit enter. The output will be the full path of the current working directory.
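
For example (the $ is the shell prompt, and the path shown is just an illustration; yours will differ):

$ pwd
/home/user/Documents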

ls – List directory contents

The ls command lists the contents of the current directory. It’s an essential command when navigating through the file system. Typing ls in the terminal will display a list of all the files and directories in the current directory.
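
For example, in a folder containing the practice directories created by the script at the end of this post:

$ ls
Animal  Continent  Fruits  generator.sh
$ ls -l     # long listing: permissions, owner, size, and modification date
$ ls -a     # also show hidden files (names starting with a dot)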

cd – Change directory

The cd command is used to change the current directory. To use it, simply type cd followed by the directory name you want to change to. For example, cd Documents will change the current directory to the Documents folder.
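
A few common variations worth knowing:

$ cd Documents     # go into the Documents folder
$ cd ..            # go up one level to the parent directory
$ cd ~             # jump back to your home directory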

mkdir – Make directory

The mkdir command is used to create a new directory. To use it, type mkdir followed by the name of the directory you want to create. For example, mkdir MyFolder will create a new directory named MyFolder.
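
For example (the nested path is just an illustration):

$ mkdir MyFolder                  # create a single directory
$ mkdir -p projects/2023/notes    # -p also creates any missing parent directories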

rm – Remove files or directories

The rm command is used to remove files or directories. It’s a powerful command, so use it with caution. To remove a file, simply type rm followed by the file name. For example, rm myfile.txt will delete a file named myfile.txt. To remove a directory, you need to add the -r option, which stands for recursive. For example, rm -r MyFolder will delete the directory named MyFolder and all its contents.
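
A few examples (be especially careful with -r; there is no recycle bin on the command line):

$ rm myfile.txt        # delete a single file
$ rm -r MyFolder       # delete a directory and everything inside it
$ rm -i myfile.txt     # -i asks for confirmation before deleting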

cp – Copy files and directories

The cp command is used to copy files and directories. To use it, type cp followed by the source file or directory and the destination file or directory. For example, cp myfile.txt MyFolder will copy the file named myfile.txt to the directory named MyFolder.
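
For example:

$ cp myfile.txt MyFolder/      # copy a file into a directory
$ cp myfile.txt backup.txt     # copy a file under a new name
$ cp -r MyFolder MyBackup      # -r is needed to copy a directory and its contents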

mv – Move or rename files and directories

The mv command is used to move or rename files and directories. To use it, type mv followed by the source file or directory and the destination. For example, mv myfile.txt MyFolder will move the file myfile.txt into the directory named MyFolder, while mv myfile.txt newname.txt will rename the file to newname.txt.
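
The same pattern covers all the common cases:

$ mv myfile.txt MyFolder/      # move a file into a directory
$ mv myfile.txt newname.txt    # rename a file
$ mv OldFolder NewFolder       # rename (or relocate) a directory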

Now obviously, when you are working, you are not going to look at these notes or search online every time you need to copy or move something. At the same time, you need to practice first so that the usage becomes second nature. That is why I have created a shell script that will create multiple folders and files inside those folders. To use it, simply copy the code and save it in a file with a .sh extension, so the file name would look like generator.sh. Then make the script executable and run it from that folder. The shell script will create 3 different folders, and inside each folder there will be some text files.
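
Assuming you saved the script as generator.sh, the whole setup looks like this:

$ chmod +x generator.sh    # make the script executable
$ ./generator.sh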

Here is the shell script. I have added some tasks below it so that you can practice on your own.

#!/bin/bash

# Create Fruits, Animal, and Continent directories
mkdir -p Fruits Animal Continent

# Create 3 text files in the Fruits directory
# (the > redirection creates each file, so a separate touch is not needed)
echo "Creating text files in the Fruits directory..."
echo "Mango is a juicy and sweet tropical fruit." > Fruits/mango.txt
echo "Oranges are citrus fruits that are high in vitamin C." > Fruits/orange.txt
echo "Bananas are a popular fruit that are rich in potassium." > Fruits/banana.txt

# Create 5 text files in the Animal directory
echo "Creating text files in the Animal directory..."
echo "Lions are large carnivorous cats that live in groups called prides." > Animal/lion.txt
echo "Elephants are the largest land animals on Earth and have a remarkable memory." > Animal/elephant.txt
echo "Giraffes are the tallest mammals in the world, with long necks and legs." > Animal/giraffe.txt
echo "Zebras are herbivorous animals with black and white stripes." > Animal/zebra.txt
echo "Hippos are semi-aquatic mammals that are known for their aggressive behavior." > Animal/hippo.txt

# Create 7 text files in the Continent directory
echo "Creating text files in the Continent directory..."
echo "Asia is the largest continent in the world and is home to many diverse cultures." > Continent/asia.txt
echo "Africa is the second-largest continent in the world and is known for its wildlife and natural resources." > Continent/africa.txt
echo "Europe is a continent that is rich in history and has many famous landmarks." > Continent/europe.txt
echo "North America is the third-largest continent in the world and is home to many different countries and cultures." > Continent/north_america.txt
echo "South America is a continent that is known for its vibrant cultures, music, and food." > Continent/south_america.txt
echo "Australia is the smallest continent in the world and is known for its unique wildlife, such as kangaroos and koalas." > Continent/australia.txt
echo "Antarctica is the coldest continent in the world and is mostly covered in ice." > Continent/antarctica.txt

echo "Folders and files created successfully!"

Practice Questions

Level: Easy

  • Use cd to go into the Fruits folder
  • Use pwd to print the current directory
  • Show the list of files in the Fruits folder (use ls)
  • Create a new folder inside the Fruits folder called backup_files (use mkdir)
  • Copy the banana.txt file into the backup_files directory
  • Move the mango.txt file into the backup_files directory
  • Remove the banana.txt file from the Fruits directory.

Level: Intermediate

  • Go to the Animal directory and list all the files along with their permissions
    • Hint: use ls -l to see the permissions
  • Create a directory called backup and copy all the files into it
  • Remove zebra.txt from the backup directory only

Level: Hard

  • Go to the Continent directory and list all the files along with their permissions
  • Create a folder called africa
  • Move the africa.txt file inside the africa folder
  • Copy elephant.txt, hippo.txt, and zebra.txt from the Animal directory into the africa folder
  • Create another folder called asia inside the Continent folder.
  • Copy mango.txt and lion.txt to the asia folder.

That’s it for today. I will talk about nano, cat, grep, and chmod in a separate post. If you have read this far, please like and share my post with others. Thank you.

Unlocking the Power of Nginx Configuration for Web Serving

Nginx is a very powerful web server that we can configure to our needs. Over time, in various situations, I have used Nginx as a reverse proxy, a load balancer, and more. I have tried to keep a note of those configurations so that I can look them up whenever I need them. Here they are.

Reverse Proxy Configuration

A reverse proxy server is a server that sits in front of web servers and directs client requests to the appropriate web server. This configuration can be used to improve the security, scalability, and reliability of web applications.

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Here we have defined a server block that listens on port 80 and forwards all incoming requests to an application running locally on port 3000 using the proxy_pass directive. The proxy_set_header directives pass the original Host header and the client’s real IP address along to the backend.
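
After any configuration change, it is a good habit to validate and reload Nginx before testing. A quick sketch, assuming a systemd-based distribution:

$ sudo nginx -t                                     # validate the configuration syntax
$ sudo systemctl reload nginx                       # apply it without dropping connections
$ curl -H "Host: example.com" http://127.0.0.1/     # confirm the proxy answers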

SSL/TLS Configuration

SSL/TLS is a protocol that encrypts data sent over the internet, and this configuration improves the security of web applications. Here we have defined a server block that listens on port 443 with SSL/TLS enabled, and we have pointed Nginx at the certificate and key files using the ssl_certificate and ssl_certificate_key directives.

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
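
If you do not have a certificate yet and just want to try this locally, you can generate a self-signed pair with openssl (the paths mirror the placeholders in the config above; browsers will warn about self-signed certificates):

$ openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout /path/to/key.pem -out /path/to/cert.pem \
    -days 365 -subj "/CN=example.com"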

Caching Configuration

Caching is a technique that stores frequently requested responses so that Nginx can serve them again without asking the backend every time. This configuration can noticeably improve the performance of web applications.

http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m inactive=60m;

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_cache my_cache;
            proxy_cache_valid 200 60m;
            proxy_pass http://localhost:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

In this configuration, we have defined a cache path and a shared-memory zone using the proxy_cache_path directive: responses are stored under /var/cache/nginx, the my_cache zone uses 10 MB of shared memory for cache keys, and entries are evicted after 60 minutes of inactivity. The server block then enables the cache with the proxy_cache directive, and proxy_cache_valid keeps successful (200) responses for 60 minutes.
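
A rough way to confirm the cache is working is to time repeated requests; assuming the backend is noticeably slower than a cache hit, the second request should return much faster:

$ curl -s -o /dev/null -w "%{time_total}\n" http://example.com/    # first request: cache miss, hits the backend
$ curl -s -o /dev/null -w "%{time_total}\n" http://example.com/    # repeat: should be served from the cache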

Securing Nginx

Rate Limiting

Rate limiting is a technique that limits the number of requests that can be made within a certain period of time. This can be used to prevent abuse and protect against DDoS attacks.

http {
    limit_req_zone $binary_remote_addr zone=my_zone:10m rate=1r/s;

    server {
        listen 80;
        server_name example.com;

        location / {
            limit_req zone=my_zone;
            proxy_pass http://localhost:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

Here, we have defined a limit_req_zone that uses the $binary_remote_addr variable to track requests per client IP address, allowing 1 request per second per IP. The server block then applies the limit with the limit_req directive; requests over the limit are rejected (with a 503 status by default).
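
You can watch the limiter kick in by firing several requests in quick succession; with 1 r/s and no burst configured, you should see something like one 200 followed by 503s:

$ for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" http://example.com/; done
200
503
503
503
503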

IP Blocking

IP blocking is a technique that blocks access from specific IP addresses. This can be used to prevent access from known attackers or malicious users. Here, we have defined a deny directive to block access from the IP address 192.168.0.1. We have also defined a server block that allows access to all other IP addresses.

http {
    deny 192.168.0.1;

    server {
        listen 80;
        server_name example.com;

        location / {
            allow all;
            proxy_pass http://localhost:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
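
From the denied address, Nginx answers with 403 Forbidden, and the rejection is recorded in the error log. A quick check, assuming the default Debian/Ubuntu log path:

$ curl -s -o /dev/null -w "%{http_code}\n" http://example.com/    # 403 from 192.168.0.1, 200 from elsewhere
$ sudo tail /var/log/nginx/error.log                              # look for "access forbidden by rule"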

Request Filtering

Request filtering is a technique that blocks requests that match certain patterns. This can be used to prevent access to sensitive files or prevent SQL injection attacks.

http {
    server {
        listen 80;
        server_name example.com;

        location / {
            if ($request_uri ~* "(.*/)?\.git(/.*)?$") {
                return 403;
            }
            if ($query_string ~* "union.*select.*\(") {
                return 403;
            }
            proxy_pass http://localhost:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

In this configuration, we have defined two if statements to block requests that match certain patterns. The first blocks any request whose URI contains a .git path segment, which prevents access to sensitive Git repository files. The second blocks query strings containing a union ... select ( pattern, a common SQL injection probe. Keep in mind that if inside a location block has well-known pitfalls in Nginx, so rules like these should stay simple.
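
Both filters are easy to test with curl; each request below should come back with a 403 if the rules are in place:

$ curl -s -o /dev/null -w "%{http_code}\n" "http://example.com/.git/config"
$ curl -s -o /dev/null -w "%{http_code}\n" "http://example.com/?q=union+select+(1)"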

Creating a Load Balancer

To create a load balancer using Nginx, we can use the upstream block together with the proxy_pass directive, as shown below. We define multiple servers in the upstream block, and Nginx distributes incoming traffic across them (round-robin by default).

upstream backend {
    server 192.168.0.10;
    server 192.168.0.11;
}

In this upstream block, we have defined two servers with IP addresses 192.168.0.10 and 192.168.0.11. We can then use the proxy_pass directive in the server block to direct traffic to the upstream servers.

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

In this server block, we listen on port 80 and set the server_name directive to example.com. The location block then uses proxy_pass http://backend to direct traffic to the servers defined in the backend upstream block.
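
Assuming each backend returns something that identifies it, you can watch the round-robin distribution from the command line:

$ for i in 1 2 3 4; do curl -s http://example.com/; done    # responses should alternate between the two backends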

The Power of Refactoring: Transforming a Dashboard for Improved Load Time

In this post I am going to share an experience of refactoring a project in a way that significantly decreased the load time of a website.

Brief History

A few years ago, I was working on a project where I was tasked with improving the performance of a certain dashboard. Before I tell you the problem with the dashboard, let me tell you what it looked like. It was mainly a mash-up of multiple charts showing some data points. When the page loaded, it showed a loading screen and, in the background, made one single request to fetch all of the data. The backend would then query the database several times to generate around seven different data points (yes, because that’s how many charts there were). Once all the querying was complete, it would return the results to the frontend. It took around 10-12 seconds before we could see anything on the page. To put the cherry on top, it was the first page an admin saw after logging in. So even if the user wanted to go to a different page for different work, they had to wait for the entire dashboard to load.

Although I am using the word frontend, it was not a detached frontend. It was a legacy application where each page was served by the server; however, in a few parts, the project used jQuery to fetch some data from APIs. (I do not know why some parts of the application used APIs while others served server-rendered HTML pages... I was not there when they made this decision.)

Do we make it look faster or actually faster?

People used to complain a lot about this dashboard, especially the slowness of that specific page, and something had to be done. Eventually, we reached the conclusion to make that part faster! Now here is the thing: you can either make things actually faster, or you can make people feel that the site is getting faster. We needed both, and that required a lot of refactoring.

A frontend framework for the reactive dashboard

To improve the performance of the dashboard, we decided to use Vue.js to render it. With Vue.js, we could take advantage of reactive components to render the page quickly and provide a better user experience. Vue.js also uses a virtual DOM, which reduces the time taken to re-render the page when data changes, so updates feel more responsive. Furthermore, Vue.js offers features such as server-side rendering and code-splitting that can push performance even further.

By using Vue.js to render the dashboard, we could ensure the page rendered quickly and efficiently, giving users a better experience. Now, a lot of you might say we could have used React or even Svelte instead of Vue.js, but at the time some of our devs knew Vue.js a little better than any other framework. On top of that, being able to turn any page into a Vue app just by adding a CDN link was extra beneficial for a legacy project like ours. That’s why we chose it.


Splitting APIs

To further enhance the performance of the dashboard, we split the data across multiple APIs instead of serving it all together. This allowed us to retrieve only the data that was necessary for each chart, reducing the load time of the page. Dividing the data into multiple endpoints also cut the processing time of each request, again improving the responsiveness of the page. At the same time, it let us take advantage of caching, storing results and serving them quickly. In this way, users get access to the data in a rapid and efficient manner.

Change in UI to make it faster

The first approach was to serve an empty dashboard page as soon as the user logged into the site, and to use Vue.js to fetch the data. Since we implemented 7 new API endpoints for the seven different data points, we made 7 parallel API requests. While waiting for responses, we rendered 7 boxes with loading indicators. Whenever one of the responses came back, Vue.js would update the corresponding chart and show it in the dashboard. This approach increased the number of network calls, but we no longer had to wait for the whole dataset before showing anything.
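
The effect of parallelizing is easy to reproduce outside the browser as well. A minimal sketch with curl (the endpoint names here are hypothetical, not the real ones from the project):

# fetch seven dashboard endpoints in parallel, then wait for all of them
for endpoint in sales users revenue traffic signups errors uptime; do
  curl -s "https://example.com/api/dashboard/$endpoint" > "/tmp/$endpoint.json" &
done
wait
echo "All seven responses fetched."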

Since the empty dashboard loaded first, users who wanted a different page could navigate there from the navbar right away. These changes made the UI feel much more responsive than before. Once that was done, we moved to the next stage: using caching to serve the data faster.

Implement caching to make response faster

Another strategy we adopted to further improve the performance of the dashboard was to use Redis to cache the results. The data is stored in Redis, a fast, in-memory data store, so it can be retrieved quickly without touching the database. Because Redis keeps data in memory, reads are extremely fast, and because it is highly scalable, it can handle large volumes of cached data, making it an ideal solution for caching query results.

Redis is an open-source, in-memory data structure store that is commonly used to cache data for faster retrieval. It reduces load time by keeping results in memory so they can be served without re-running expensive queries. We decided to use Redis to cache the dashboard’s query results in order to improve performance and reduce the time spent querying the database. Using Redis also reduced the load on the database itself, saving time and resources.

In our case, the dashboard did not have to be updated in real time. The requirement was: as long as the dashboard never shows data that is more than 30 minutes old, everyone is happy. This made our work a little easier. We configured Redis so that cached data expires after 30 minutes.
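
On the command line, the same expiry behaviour looks like this (dashboard:sales is a hypothetical key name):

$ redis-cli SET dashboard:sales '{"total": 1234}' EX 1800    # cache for 30 minutes
OK
$ redis-cli TTL dashboard:sales                              # seconds left before the key expires
(integer) 1800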

Here is an example code snippet using Node.js to set a value in Redis with an expiry of 30 minutes:

// Note: this uses the callback-style API of the node_redis v3 package;
// newer versions of the redis package use a promise-based API instead.
const redis = require('redis');
const client = redis.createClient();

// 'EX' 1800 tells Redis to expire the key after 1800 seconds (30 minutes)
client.set('myKey', 'myValue', 'EX', 1800, (err, reply) => {
  if (err) throw err;
  console.log(reply); // logs "OK" on success
});

client.quit(); // waits for pending replies, then closes the connection

In this example, myKey is the key that we want to set in Redis and myValue is the value that we want to associate with the key. The third argument 'EX' indicates that we want to set an expiry time for the key, and the fourth argument 1800 sets the expiry time to 30 minutes (since the expiry time is measured in seconds).

The set method also takes a callback function that is executed when the operation is complete. If an error occurs, the error is thrown; otherwise, the reply from Redis is logged into the console.

Finally, we close the Redis client connection using client.quit().

So every time the frontend called one of the APIs, we would first check whether Redis already had data for it. If it did, we simply returned that data; otherwise, we queried the database, saved the result in Redis with a 30-minute expiry, and returned it as the API response.
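
The overall cache-aside flow can be sketched in a few lines of shell; fetch_from_db here is a hypothetical stand-in for the real database query:

#!/bin/bash
get_dashboard_data() {
  local key="dashboard:$1"
  local cached
  cached=$(redis-cli GET "$key")
  if [ -n "$cached" ]; then
    echo "$cached"                                      # cache hit: serve straight from Redis
  else
    local fresh
    fresh=$(fetch_from_db "$1")                         # cache miss: query the database (hypothetical helper)
    redis-cli SET "$key" "$fresh" EX 1800 > /dev/null   # store the result with a 30-minute expiry
    echo "$fresh"                                       # and return it as the response
  fi
}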

Conclusion

It’s not that hard to make a service faster; it just took a lot of rewriting. That is why I always tell people to ask whether there is a better way to build a feature before jumping into the implementation. Even if it takes more time right now to get it right, not fixing it will only consume more time later on.

Thank you for reading! What topic would you like me to write about next? Let me know and I’ll do my best to create a helpful blog post for you.
