Monday, December 30, 2024

Port forwarding issues and port collisions caused by more than one process

Port forwarding script to identify collisions


Identifying local and remote port forwarding processes, as well as checking for potential collisions when multiple processes are accessing the same port, is critical for troubleshooting networking issues. In Linux, tools like netstat, ss, and lsof can help in detecting port usage, and by combining them with process information, you can build a script to identify the relevant details.

Goals of the Script:

  1. Identify Local and Remote Port Forwarding: We'll identify which processes are involved in local and remote port forwarding (e.g., SSH tunnels).
  2. Check for Port Collisions: We'll ensure that multiple processes are not accessing the same port and causing conflicts.

Steps to Implement the Script:

  1. Use ss to identify active port connections.
  2. Use lsof to associate ports with specific processes.
  3. Identify SSH tunneling processes by checking for connections on forwarded ports.
  4. Detect multiple processes using the same port to avoid collisions.

Key Concepts:

  • Local Port Forwarding: Forwarding from a local machine to a remote machine through a specific port.
  • Remote Port Forwarding: Forwarding from a remote machine to a local machine via a specific port.
  • Port Collisions: Two or more processes trying to bind to the same local or remote port, leading to resource conflicts.
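To make the collision check concrete, here is a minimal sketch of extracting ports from `ss`-style local-address fields and flagging duplicates (the sample addresses are hypothetical):

```shell
# Sample "local address:port" fields as ss -tuln would print them (hypothetical data)
addrs="127.0.0.1:8080
0.0.0.0:22
[::]:8080"

# Keep only the text after the last ':' to get the bare port number
ports=$(echo "$addrs" | awk -F: '{print $NF}')

# Any port appearing more than once is a potential collision
echo "$ports" | sort | uniq -c | awk '$1 > 1 {print "collision on port " $2}'
```

The same sort | uniq -c pattern is what the full script below relies on.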

Example Script: port_forwarding_detector.sh

This script will detect both local and remote port forwarding, list processes accessing specific ports, and check for collisions.

#!/bin/bash

# Function to display an error message and exit
error_exit() {
    echo "$1" 1>&2
    exit 1
}

# Check if the script is running as root (needed for `lsof` and `ss -p` to see other users' processes)
if [[ $(id -u) -ne 0 ]]; then
    echo "Warning: The script is not running as root. Some details might be incomplete."
fi

echo "Detecting local and remote port forwarding, and checking for port collisions..."

# Detect port forwarding by SSH tunneling (SSH processes started with -L or -R options)
echo "### Detecting Local and Remote Port Forwarding via SSH Tunnels ###"
ps aux | grep '[s]sh' | grep -E '(-L|-R)' | while read -r line; do
    pid=$(echo "$line" | awk '{print $2}')
    user=$(echo "$line" | awk '{print $1}')
    command=$(echo "$line" | awk '{for (i = 11; i <= NF; i++) printf "%s ", $i}')
    echo "SSH Port Forwarding Process Detected:"
    echo "  PID: $pid"
    echo "  Command: $command"
    echo "  User: $user"
    echo "-----------------------------------------"
done

# Use `ss` to show listening ports and associated processes (-p is needed for process info)
echo "### Identifying Processes Binding to Ports ###"
ss -tulnp | awk 'NR > 1 {print $5, $7}' | while read -r addr proc; do
    # Extract the port (text after the last ':') and the pid from "users:((...,pid=N,...))"
    port=${addr##*:}
    pid=$(echo "$proc" | grep -oE 'pid=[0-9]+' | head -n 1 | cut -d= -f2)
    [ -z "$pid" ] && continue
    process_info=$(ps -p "$pid" -o comm=)
    echo "Port $port is being used by process $process_info (PID: $pid)"
done

# Use lsof to detect multiple processes listening on the same port
echo "### Checking for Multiple Processes Accessing the Same Port ###"
lsof -iTCP -sTCP:LISTEN -P -n | awk 'NR > 1 {n = split($9, a, ":"); print a[n]}' | sort | uniq -c | while read -r count port; do
    if [ "$count" -gt 1 ]; then
        echo "Warning: Port $port is being used by multiple processes ($count instances)"
        lsof -iTCP -sTCP:LISTEN -P -n | awk -v p="$port" 'NR > 1 {n = split($9, a, ":"); if (a[n] == p) print "PID: "$2, "Command: "$1, "User: "$3}' | sort -u
    fi
done

# Additional check: flag listening ports owned by ssh (likely forwarding endpoints)
echo "### Checking for Collisions in Local and Remote Port Forwarding ###"
ss -tulnp | awk 'NR > 1 {print $5, $7}' | while read -r addr proc; do
    local_port=${addr##*:}
    pid=$(echo "$proc" | grep -oE 'pid=[0-9]+' | head -n 1 | cut -d= -f2)
    [ -z "$pid" ] && continue
    process_info=$(ps -p "$pid" -o comm=)

    # A listening socket owned by ssh usually corresponds to a forwarded port
    if [[ "$process_info" == *"ssh"* ]]; then
        echo "Detected possible SSH local port forwarding on port $local_port (PID: $pid)"
    fi
done

echo "Port forwarding detection complete. Monitoring for potential issues..."

How This Script Works:

  1. Detecting Local and Remote Port Forwarding:

    • The script uses the ps command to detect SSH processes running with the -L (local forwarding) or -R (remote forwarding) options. It filters for lines containing SSH port forwarding arguments and prints the details (PID, command, and user).
  2. Identify Processes Binding to Ports:

    • It uses ss -tulnp to list all listening ports (-t TCP, -u UDP, -l listening sockets, -n numeric output, -p owning process). It associates each port with a PID and process.
  3. Check for Collisions:

    • The lsof command is used to list processes with listening network sockets (lsof -iTCP -sTCP:LISTEN -P -n).
    • The script counts how many processes are using the same port and flags potential collisions (multiple processes accessing the same port).
  4. Collisions for Local and Remote Port Forwarding:

    • The script checks if the same port is being used both locally and remotely, which can indicate potential conflicts or overlapping port forwarding settings.
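A fiddly detail when associating ports with processes is parsing the process column that `ss -p` prints; here is that extraction in isolation (the sample string and its values are hypothetical):

```shell
# Sample process column as printed by `ss -tulnp` (hypothetical values)
proc='users:(("sshd",pid=2345,fd=3))'

# Extract the numeric PID from the pid=... token
pid=$(echo "$proc" | grep -oE 'pid=[0-9]+' | head -n 1 | cut -d= -f2)
echo "pid=$pid"
```

head -n 1 keeps only the first PID when several processes share the socket.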

Example Output:

Detecting local and remote port forwarding, and checking for port collisions...

### Detecting Local and Remote Port Forwarding via SSH Tunnels ###
SSH Port Forwarding Process Detected:
  PID: 2345
  Command: ssh -L 8080:localhost:80 user@remotehost
  User: user
-----------------------------------------
SSH Port Forwarding Process Detected:
  PID: 2346
  Command: ssh -R 9090:localhost:90 user@remotehost
  User: user
-----------------------------------------

### Identifying Processes Binding to Ports ###
Port 8080 is being used by process ssh (PID: 2345)
Port 9090 is being used by process ssh (PID: 2346)

### Checking for Multiple Processes Accessing the Same Port ###
Port 8080 is being used by multiple processes (2 instances)
PID: 2345 Command: ssh User: user
PID: 2347 Command: apache2 User: root
-----------------------------------------
Port 9090 is being used by multiple processes (2 instances)
PID: 2346 Command: ssh User: user
PID: 2348 Command: apache2 User: root

### Checking for Collisions in Local and Remote Port Forwarding ###
Detected possible SSH local port forwarding on port 8080 (PID: 2345)
Detected possible SSH remote port forwarding on port 9090 (PID: 2346)

Key Points:

  • Local and Remote Port Forwarding: This is detected through SSH processes with the -L (local forwarding) and -R (remote forwarding) options.
  • Port Collisions: The script flags when multiple processes are accessing the same port, which can lead to port conflicts.
  • Detailed Process Information: For each port collision, the script provides detailed process information, such as the PID, command, and user.

Conclusion:

This script can be useful for detecting local and remote port forwarding configurations, identifying potential port collisions, and providing detailed information on which processes are binding to which ports. By regularly running this script, you can proactively manage port usage and avoid issues caused by port conflicts in your system. 




File Leak in linux

 File Leak analysis per process 


Identifying file descriptor leaks on Linux can be tricky, but it's important to monitor the number of file descriptors (FDs) a process is using, especially when you're troubleshooting resource exhaustion or system performance issues. File descriptor leaks occur when a process opens files (or other resources like sockets, pipes, etc.) but fails to close them, eventually leading to resource exhaustion.

Here's a script that can help you identify file descriptor leaks by monitoring processes, checking their open file descriptors, and tracking how many are open over time. We'll also go over common causes of file descriptor leaks.

Key Concepts:

  • File Descriptors (FDs): These are resources that processes use to interact with files, sockets, etc. Each process is limited by the number of FDs it can open, typically set by the ulimit command.
  • FD Leak: A file descriptor leak occurs when a process opens a file or socket and doesn't properly close it, leading to resource exhaustion.
  • Monitoring: We’ll monitor the open file descriptors over time and check for unusual growth.
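The per-process FD count used throughout this section comes straight from /proc; you can try it on your own shell (Linux-only; FD 9 is an arbitrary free descriptor):

```shell
# Count this shell's open file descriptors via /proc (Linux-specific)
before=$(ls -1 /proc/$$/fd | wc -l)

# Open one extra descriptor (FD 9) on /dev/null
exec 9>/dev/null
after=$(ls -1 /proc/$$/fd | wc -l)
echo "FDs before=$before after=$after"

# Close it again; the count drops back
exec 9>&-
```

A leaking process shows this count growing without the matching close.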

Script to Identify File Descriptor Leaks

This script monitors open file descriptors for each process over time. It can help you identify processes with growing file descriptor counts (potential FD leaks).

Script: fd_leak_detector.sh

#!/bin/bash

# Check if the user is root (required for reading file descriptors of other users)
if [[ $(id -u) -ne 0 ]]; then
    echo "You must run this script as root to access other users' processes' file descriptors."
    exit 1
fi

# Temp file to store process information
TMP_FILE=$(mktemp)

# Number of seconds to sleep between checks
SLEEP_INTERVAL=10
# Number of checks to perform (you can increase this value)
NUM_CHECKS=6

echo "Monitoring file descriptors for leaks. Checking every $SLEEP_INTERVAL seconds..."

# Initial snapshot of file descriptors count
for i in $(seq 1 $NUM_CHECKS); do
    echo "Snapshot $i: $(date)" >> $TMP_FILE
    # Loop through all process IDs
    for pid in /proc/[0-9]*; do
        # Check if the process still exists and exposes an fd directory
        if [ -d "$pid/fd" ]; then
            # Count the open file descriptors (suppress errors if the process exits mid-scan)
            fd_count=$(ls -1 "$pid/fd" 2>/dev/null | wc -l)
            process_name=$(ps -p "$(basename $pid)" -o comm= 2>/dev/null)
            # Record PID, process name, and open FD count in the temporary file
            echo "$(basename $pid)  ${process_name:-unknown}  $fd_count" >> $TMP_FILE
        fi
    done

    # Sleep for the specified interval before next check
    sleep $SLEEP_INTERVAL
done

# Analyze the results and identify processes with growing FD counts
echo "Analyzing file descriptor growth over time..."

# For each PID, compare the smallest and largest FD count seen across the snapshots
awk '
/^Snapshot/ { next }
{
    pid = $1; name[pid] = $2; fd = $3
    if (!(pid in min) || fd < min[pid]) min[pid] = fd
    if (!(pid in max) || fd > max[pid]) max[pid] = fd
} END {
    # A growth of more than 10 FDs over the run is flagged (threshold is arbitrary; tune as needed)
    for (pid in max)
        if (max[pid] - min[pid] > 10)
            print "Potential FD leak detected! Process: " name[pid] " with PID: " pid " grew from " min[pid] " to " max[pid] " open file descriptors."
}' $TMP_FILE

# Cleanup
rm -f $TMP_FILE

How This Script Works:

  1. Root Privileges: The script checks if it's being run as root because it needs permission to access other processes' /proc/[PID]/fd directories.

  2. Snapshot Collection: The script takes snapshots of the number of file descriptors open for each process over multiple intervals (controlled by SLEEP_INTERVAL and NUM_CHECKS). The file descriptor count is obtained by counting the entries in /proc/[PID]/fd.

  3. Analysis: After collecting the data, the script looks for processes whose file descriptor counts grow significantly over time, which could indicate a file descriptor leak.

  4. Output: The script will display processes where the number of file descriptors grows over time, indicating potential leaks.

Example Output:

Monitoring file descriptors for leaks. Checking every 10 seconds...

Snapshot 1: Mon Dec 30 11:20:02 UTC 2024
1234  mysqld  45
5678  nginx  12
Snapshot 2: Mon Dec 30 11:20:12 UTC 2024
1234  mysqld  50
5678  nginx  15
Snapshot 3: Mon Dec 30 11:20:22 UTC 2024
1234  mysqld  56
5678  nginx  20
...
Analyzing file descriptor growth over time...
Potential FD leak detected! Process: mysqld with PID: 1234 grew from 45 to 56 open file descriptors.

How to Identify Each Process File Descriptor Leak Growth

The above script detects file descriptor growth over time. If the FD count increases without being released (i.e., the process keeps opening more file descriptors without closing them), this is indicative of a potential FD leak.
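A quick way to spot such growth in collected snapshot data is to compare the lowest and highest FD count seen per PID (the snapshot lines below are hypothetical):

```shell
# Hypothetical snapshot lines: PID, process name, open FD count
data="1234 mysqld 45
1234 mysqld 50
1234 mysqld 56
5678 nginx 12
5678 nginx 12"

# Track min/max per PID and flag processes whose count grew by 10 or more
result=$(echo "$data" | awk '
{
    pid = $1; name[pid] = $2
    if (!(pid in min) || $3 < min[pid]) min[pid] = $3
    if (!(pid in max) || $3 > max[pid]) max[pid] = $3
} END {
    for (pid in max)
        if (max[pid] - min[pid] >= 10)
            print name[pid] " (PID " pid ") grew from " min[pid] " to " max[pid] " FDs"
}')
echo "$result"
```

Here mysqld is flagged (45 to 56) while the stable nginx count is not.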

You can use the following additional strategies to troubleshoot and confirm the leak:

Common Causes of File Descriptor Leaks

  1. Improperly Closed Sockets or Files: If a process opens sockets or files but does not close them properly after use, this will lead to a leak.

  2. Faulty Application Code: In custom applications, improper error handling can lead to a failure to close file descriptors when exceptions or errors occur.

  3. Libraries or Daemons: Some libraries or daemons (such as database servers or network services) may not handle file descriptors efficiently under high load.

  4. Improper Handling of Network Connections: Network servers (e.g., web servers, database servers) may fail to close sockets correctly under heavy traffic, leading to FD leaks.

To Diagnose the Cause:

  1. Check Application Logs: Review the logs for any errors or warning messages related to resource exhaustion or socket failures.

  2. Use strace: If you suspect a particular process, use strace to trace system calls and watch for open()/openat() and close() calls. For example:

    strace -e trace=open,openat,close -p <PID>
    
  3. Check for Abnormally High FD Usage: Processes with an unusually high FD count should be investigated further. Use tools like lsof to list open files for these processes.

    lsof -p <PID>
    
  4. Limit Resource Usage: Consider temporarily setting resource limits (e.g., ulimit -n for open files) to prevent FD leaks from crashing the system.

    ulimit -n 10000  # Set max open files to 10,000
    

To Fix FD Leaks:

  • Code Fixes: In application code, ensure that files, sockets, or pipes are always closed after use, even in error conditions. Using RAII (Resource Acquisition Is Initialization) or try/finally blocks in languages like Python or Java can help ensure this.

  • Use Resource Management Tools: Many modern frameworks and libraries handle resource cleanup for you, but older code or custom applications might require manual intervention.

Conclusion

This script and the methods described will help you identify processes with file descriptor leaks by tracking the growth of open file descriptors over time. The root cause of these leaks is often due to improper resource management in code, but monitoring and early detection can significantly improve system stability.

Monday, December 9, 2024

memory tools

 https://github.com/0voice/kernel_memory_management/blob/main/%E2%9C%8D%20%E6%96%87%E7%AB%A0/5%20useful%20tools%20to%20detect%20memory%20leaks%20with%20examples.md

https://www.yugabyte.com/blog/linux-performance-tuning-memory-disk-io/


Found from chatgpt:

If you're looking to identify which processes are using the most cache memory on a Linux system, you can use a script that parses the /proc/[PID]/smaps or /proc/meminfo to give detailed information about memory usage, including cached memory for individual processes. Here's an approach that can help you identify which processes are consuming the most cache memory.

Script to Analyze Memory Cache Usage by Process

This script will loop through all processes, check their memory usage, and summarize how much memory is cached for each process.

Steps:

  1. Use /proc/[PID]/smaps to identify memory usage for each process.
  2. Look for the Private_Clean, Private_Dirty, and Shared_Clean fields.
  3. Sum the memory usage per process and list the top offenders.

Script: cache_usage_by_process.sh

#!/bin/bash

# Check if the user is root (required for reading /proc/[PID]/smaps)
if [[ $(id -u) -ne 0 ]]; then
    echo "You must run this script as root to access /proc/[PID]/smaps"
    exit 1
fi

# Temporary file to store results
TMP_FILE=$(mktemp)

# Header
echo "PID   Process Name            Cache (KB)" > $TMP_FILE
echo "----------------------------------------" >> $TMP_FILE

# Loop through each process in /proc
for pid in /proc/[0-9]*; do
    # Check if smaps file exists for the process
    if [ -f "$pid/smaps" ]; then
        # Extract process name
        process_name=$(ps -p $(basename $pid) -o comm=)

        # Extract cache memory from smaps file
        # `Private_Clean` + `Private_Dirty` + `Shared_Clean` will give an estimate of cache usage
        cache_usage=$(grep -E 'Private_Clean|Private_Dirty|Shared_Clean' $pid/smaps | awk '{sum+=$2} END {print sum}')

        # If cache_usage is not empty, output the data
        if [ -n "$cache_usage" ]; then
            echo "$(basename $pid)   $process_name   $cache_usage KB" >> $TMP_FILE
        fi
    fi
done

# Sort the results by cache usage (skipping the header lines) and display the top offenders
echo "----------------------------------------"
echo "Top Processes Using Cached Memory:"
tail -n +3 $TMP_FILE | sort -k3 -n -r | head -n 20

# Cleanup
rm -f $TMP_FILE

How This Script Works:

  1. Check for Root Privileges: The script needs root privileges because /proc/[PID]/smaps can only be accessed by the root user.
  2. Loop through Processes: The script loops through all directories in /proc (which correspond to process IDs).
  3. Check for smaps File: It checks if the smaps file exists for the process (this file contains detailed memory info).
  4. Extract Memory Data: It extracts the cache-related memory stats from the smaps file. Specifically, it looks for Private_Clean, Private_Dirty, and Shared_Clean memory.
  5. Summing Memory: The script adds up these values to calculate the total cached memory for each process.
  6. Sorting and Display: After gathering the data, it sorts the results by cache usage and shows the top processes.
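The extraction in steps 4 and 5 is just an awk sum over three smaps fields; here it is in isolation on a hypothetical /proc/[PID]/smaps excerpt:

```shell
# Hypothetical excerpt from /proc/<PID>/smaps (values in kB)
smaps="Private_Clean:       120 kB
Private_Dirty:        64 kB
Shared_Clean:        256 kB
Shared_Dirty:          8 kB"

# Sum only the fields the script counts as cache-like memory (Shared_Dirty is excluded)
cache=$(echo "$smaps" | awk '/Private_Clean|Private_Dirty|Shared_Clean/ {sum += $2} END {print sum " kB"}')
echo "$cache"
```

For this sample the three matched fields add up to 440 kB.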

Example Output:

The output will display the processes consuming the most cache memory, for example:

PID   Process Name            Cache (KB)
----------------------------------------
1234  firefox                4520 KB
5678  mysqld                 10240 KB
2345  apache2                3580 KB
----------------------------------------
Top Processes Using Cached Memory:
5678  mysqld                 10240 KB
1234  firefox                4520 KB
2345  apache2                3580 KB
...

Explanation of Key Memory Fields:

  • Private_Clean: Memory that is clean (i.e., has not been modified) and is private to the process.
  • Private_Dirty: Memory that has been modified and is private to the process.
  • Shared_Clean: Memory that is clean and shared with other processes.
  • Shared_Dirty: Memory that has been modified and is shared with other processes.

Notes:

  • Cached Memory: The cache used by a process in this script is a sum of the values in Private_Clean, Private_Dirty, and Shared_Clean. These values give an idea of how much memory is used by the page cache that could potentially be freed up.
  • Performance Consideration: This script can be slow if there are a lot of processes, as it reads the smaps file for each process.
  • Permissions: You'll need to run this script with root privileges to access the /proc/[PID]/smaps of other processes.

Running the Script:

  1. Save the script to a file, for example cache_usage_by_process.sh.
  2. Make the script executable:
    chmod +x cache_usage_by_process.sh
    
  3. Run the script as root:
    sudo ./cache_usage_by_process.sh
    

This will show you which processes are using the most cached memory on your system. You can modify the script to include other memory details as needed.




Thursday, May 30, 2024

Python - ML,Datascience

 There are various libraries used for different purposes 


For Advanced Data analysis 

-  NumPy

- SciPy

- pandas


Data Visualization

- matplotlib

- Seaborn


Machine Learning

- scikit-learn

- TensorFlow

- Keras

Sunday, January 23, 2022

Distributed System Design

 Distributed System: A distributed system is a collection of components located on different networked machines that communicate and coordinate with each other by exchanging messages to achieve a common goal.


Distributed system will provide 3 major benefits: 1. Scalability 2. High Performance and 3. High Availability


When we design a distributed system we tend to assume the following fallacies, but in practice they never hold.

Example: 

  1. The network is reliable: hardware or software failures can occur at any time, including failures due to a lack of system resources.
  2. Latency is zero: latency is never zero in practice.
  3. Bandwidth is infinite: frameworks such as gRPC (https://grpc.io/), Apache Thrift (https://thrift.apache.org/), and REST API libraries help reduce payload size, but there is still no guarantee.
  4. The network is secure: devices distributed across networks may not have secure communication.
  5. Topology doesn't change: the network topology may change as new requirements come in.
  6. There is one administrator: a single admin can manage things better, but in reality there are usually many.
  7. Transport cost is zero: transport cost may increase when we communicate between devices in different geographic locations.
  8. The network is homogeneous: a uniform network is rarely possible when devices are geo-distributed.


The lack of a global clock (networks are asynchronous), partial failures (e.g., 1 node out of 6 may fail), and parallelism or multiprocessing (concurrent access to the same resource) are the key considerations that must be addressed in distributed system design.

Safety vs. liveness: sometimes we make sure safety is more important than the liveness (availability) of a solution.


When we distribute messages or events between nodes, communication can be synchronous or asynchronous.
In practice we interact mostly asynchronously, and we must keep node failures in mind: a node may crash, halt the system, become unable to send or receive messages, or send inappropriate messages due to failures in memory or data.


Thursday, August 27, 2020

System Design Concepts

 

When developing any distributed system, the basic attributes to consider are Scalability, Efficiency, Reliability, Manageability and Availability.

Scalability means system can be scaled in processing, network grow or memory capabilities increased or number of systems may increase as per demand.

Horizontal Scaling: Adding more systems/devices; capacity can be changed dynamically without rebooting or interrupting the current service.

Vertical Scaling: Adding more power (CPUs, RAM, storage) to an existing system/device; this may require a reboot or stopping the service during the upgrade.



Sunday, June 28, 2020

Software development process

Waterfall model

   It will be used when there are clear requirements and a fixed scope for a project.

   1. Collect & analyze requirements
        - clarify with stakeholders; document thoroughly
   2. Architecture definition
        - it is the blueprint of the product
        - which packages and components will form our system
        - what are the fundamental types of each component
        - how each component interacts with the others
        - is the software secure, performant, and robust, and are error cases handled
        - is the system design modular for future extension
        - will any third-party components be used, and what are their licensing agreements
   3. Implementation
        - coding
        - unit testing
   4. Verification
        - all requirements implemented as per the requirement agreement
        - functional testing
        - performance validation
        - security
        - user friendliness
   5. Maintenance phase
        - fixing customer bugs, enhancements, etc.

 Agile Framework

It will be used when the requirements are unstable and may change frequently.
1. Scrum
2. Kanban
3. Test driven development(TDD) 

Thursday, June 25, 2020

gdb with release binary

Debugging a stripped binary by disassembling the assembly code

Debugging a binary with no symbols

Write the code below into test.c
//----------------------------------
#include<stdio.h>

void fun(int x)
{
        int a = 10;
        printf("%d\n", a+x);
}
int main()
{
        int x = 5;
        fun(5);
        return 0;
}
//-----------------------------
-> compile with
#gcc -O3 test.c -o test

-> Look for a symbols using nm command
# nm test
0000000000201010 B __bss_start
0000000000201010 b completed.7698
                 w __cxa_finalize@@GLIBC_2.2.5
0000000000201000 D __data_start
0000000000201000 W data_start
00000000000005c0 t deregister_tm_clones
0000000000000650 t __do_global_dtors_aux
0000000000200dc0 t __do_global_dtors_aux_fini_array_entry
0000000000201008 D __dso_handle
0000000000200dc8 d _DYNAMIC
0000000000201010 D _edata
0000000000201018 B _end
0000000000000734 T _fini
0000000000000690 t frame_dummy
0000000000200db8 t __frame_dummy_init_array_entry
00000000000008a4 r __FRAME_END__
00000000000006a0 T fun
0000000000200fb8 d _GLOBAL_OFFSET_TABLE_
                 w __gmon_start__
0000000000000748 r __GNU_EH_FRAME_HDR
0000000000000510 T _init
0000000000200dc0 t __init_array_end
0000000000200db8 t __init_array_start
0000000000000740 R _IO_stdin_used
                 w _ITM_deregisterTMCloneTable
                 w _ITM_registerTMCloneTable
0000000000000730 T __libc_csu_fini
00000000000006c0 T __libc_csu_init
                 U __libc_start_main@@GLIBC_2.2.5
0000000000000560 T main
                 U __printf_chk@@GLIBC_2.3.4
0000000000000600 t register_tm_clones
0000000000000590 T _start
0000000000201010 D __TMC_END__


-> Remove symbols using strip -s
# strip -s test
-> Check for symbols
#nm test
nm: test: no symbols

-> run gdb for test
 #gdb test
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from test...(no debugging symbols found)...done.
(gdb) b main
Function "main" not defined.

Lets start how to debug

-> Locate the entry point with 'info file'
    (gdb) info file
Symbols from "/home/bkotha/tt/test".
Local exec file:
        `/home/bkotha/tt/test', file type elf64-x86-64.
        Entry point: 0x590
        ----
(gdb)

-> now set the break point at the entry point address (use '*' before a raw address)
(gdb) b *0x590
   Breakpoint 1 at 0x590

-> then disassemble the code
(gdb) disas


GDB

To debug a C or C++ application with gdb, build it in debug mode, i.e. with the -g compile option.
gdb commands 


-g in compilation: will enable debug symbols into binary.

Ex: gcc -g mytest.c  or gcc -g  test.c -o test
      g++ -g mytest.cpp or  g++ -g test.cpp -o test
1. start gdb with executable and set the break point and run  :
    #gdb test
    (gdb) b main # to set the break point at main function start
     (gdb) run  # program start executes 
2. start gdb without process and add process with 'file'.
     #gdb
     (gdb) file test # start the test executable for debugging 

3. Passing console arguments to process with run 
     (gdb) run  abc 3 # abc , 3 are command line arguments for test program.  

Back trace / stack frames : using bt
4. On break point / core parser gdb running out we can see current stack trace
       (gdb)bt
      It will list stack frames with frame id for each stack frame.
4.1  list each frame with f num
     (gdb) f 4 # here 4 is 4th stack frame listed in bt

Break Points: b , break, br any of this keywords can be used to set the break points
5.  Adding break point with filename with line number
     (gdb) b test.c:10  # break point at line number 10
6. Adding breakpoint with function names
      (gdb) break fun1   #fun1 is function in test.c
     (gdb) b myclass::fun2   #fun2 is myclass member function
6.1  break with memory address
       (gdb)b *(memoryAddress)  # break at a memory address instead of a variable or function name
7. List all break points
      (gdb) info break  # list all break points with break point numbers in sequence.
8. Delete/Remove breakpoint using d
      (gdb)d 3  #here 3 is the 3rd break point
9. Go to the next line, continue to the next break point, or step into a function using n, c & s
      (gdb) s
      (gdb) n
      (gdb) c
10.  Print the values with p or print keyword
      (gdb) print iVar   #iVar is the variable
      (gdb) p fVal    #fVal is a variable
11. To set the values for variables using 'set'
       (gdb)set iVar=20
       (gdb)p iVar  # iVar will print 20
12. Execute the global functions after breakpoint hit
       (gdb)p myglobalFun  #myglobalFun is a global function can be run here
13. List the source code around a given line number or function using 'list'
      (gdb)list 10
      (gdb)list myglobalFun
14.  Print the current debug location (file and line) using 'frame'
      (gdb)frame
15. Quit from the debug console with q or the quit keyword
      (gdb)quit

x Command: 'x/FMT Address'
16. The x command displays a region of memory at the specified address in the specified format.
     (gdb)x/100i $sp   # $sp is the current stack pointer address; 100i disassembles 100 instructions starting there
     (gdb)x/20s 0x435281720   # display 20 strings starting at the address
      (gdb)x/x  display in hexadecimal
      (gdb)x/d  display in decimal format
      (gdb)x/c  display in character format
  help for x can be checked in gdb
        (gdb) help x

Multithreading debug:
In multithreaded debugging, threads share the code and data segments of the process; only the stack is thread-specific. So it is mainly the per-thread stack data that needs to be examined.
17. To list the threads info using 'info threads'
     (gdb)info threads   # which will list all threads with thread id
18. To check all threads stack traces ' thread apply all bt '
      (gdb) thread apply all bt
19.  To check a particular thread's stack trace, first select the thread using 'thread <thread id>' (ids are listed by 'info threads'), then do bt for that thread's stack trace
      (gdb)thread 2   # 2 is the thread number listed by 'info threads'
      (gdb) bt 


debug for release/ stripped / no symbols binary using gdb

20. In gdb, run 'info file'; it lists the entry point address of the program. Set a break point at the entry point address, then 'disas' will list the assembly code, which contains the address of the main function.

With the assembly code, try to understand or match your function names and data to follow the flow.
For more info, click on the link debug for release/ stripped / no symbols binary using gdb, which has more links in it.



Thursday, May 14, 2020

interview question

Preparation practice and planning resource:
https://github.com/jwasham/coding-interview-university



1. How the Alexa project works: the basic flow to and from the AWS cloud.
2. Socket programming APIs: which APIs are blocking calls, and how a server handles multiple clients. The interviewer's point was select, but my answer that day was epoll.
3. Thread syntax: I asked whether POSIX or C++11 threads; he said any, so I went with C++11, which is easier. I suspect he was not happy, as the C++11 thread syntax is easy.
4. Pipes, named pipes, and message queues: the differences and how each has advantages over the others.
5. SIP knowledge.
6. Asked current package and expected package.
---
1. About any project: I went with software update distribution in a local network, and Alexa, describing the problem statement of software updates at the customer site and hands-free calling.
2. Explain smart pointers.
3. Use of virtual functions and pure virtual functions.
4. Logical question: given the numbers 1 to 99 with one number missing, how do you find the missing number? Her expectation: compute (n*(n+1))/2, then subtract each element; the remaining sum is the missing number.
5. How to remove a repeated sequence of characters, e.g. abcbcdekeklmn -> abcdeklmn.
6. Samsung SDAL layer project.
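The sum-formula answer to question 4 can be sketched in a few lines of shell (the missing value 42 is just an example):

```shell
# Find the missing number in 1..99: expected sum minus actual sum
n=99
expected=$(( n * (n + 1) / 2 ))

# Hypothetical input: 1..99 with 42 removed
actual=0
for i in $(seq 1 $n); do
    [ "$i" -eq 42 ] && continue
    actual=$(( actual + i ))
done

echo "missing number: $(( expected - actual ))"
```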
-----------------------
1. When a new project comes, what is your approach as an architect?
2. Any tools used for deriving and finalizing requirements?
3. Why are you still working as a developer?
4. When you need to outsource a UI project, what will you consider for the client to deliver, and how will you give your requirements to them?
5. How will you report project health to management as an architect, given that Jira tasks and status will already be provided by the program manager?
6. SOLID principles: here his expectation was that some keywords like TDD are useful.
7. How big is your team, and what is the org structure?
8. Why will you fit this role?
-----------
1. What are the projects, and what is your role in technical terms?
2. Rate yourself in C, C++, Linux, Go, Docker, DS.
3. Given a DLL (doubly linked list), can you sort the list with an efficient approach?
4. Swap 2 variables without using a 3rd variable.
5. What are the layers in the OSI model?
6. What is the difference between a MAC and an IP address?
7. You have a device at each end, each connected to a switch, and the switches connected to a router. As soon as device-to-device communication starts, what technical steps happen?
8. Docker virtualization?
9. Smart pointers.
---------------
1. Pipeline Jenkins vs classic Jenkins?
2. Difference between array and list in Python.
3. How to specify that one binary depends on another binary in a makefile?
4. .PHONY usage?
5. Docker and containers?
6. The hardware clock speed is 50 MHz and a driver interrupt arrives; how will you calculate the interrupt handling time accurately?
7. You did git add and git commit for 10 files. Before the push command, one file should be reverted from the commit; what are the git steps, with syntax?
8. What is the exact difference between git rebase and merge? Syntax for each?
----------------
1. You have 100MB allocated using new on the heap, used by multiple threads. One thread needs 50MB; how can you assign it from the existing 100MB pointer? He wanted to know how you would reallocate memory in C++ using new.
2. What are the thread communication mechanisms?
3. How does Qt work with the Android platform?
4. The 3-liter/5-liter jar puzzle to measure 4 liters.
5. The light-bulb puzzle: find which switch controls which light without entering the room.
6. You have a cube of 4cm, painted green on all sides. If I cut it into 1cm cubes, how many cubes come out without color? How many 1cm cubes in total?
7. What is the OpenEmbedded build framework and how does it work?
8. OOP concepts.
9. Ubuntu device partition information: how many partitions, what commands for partitioning, and can we add more partitions?
10. A C++ code snippet to find the problem in.
11. Mutex, threads and condition-variable flow control.
12. IPC.
13. OpenGL-related ideas.
14. Why are you looking for a change?
-----------------
1. C++ OOP concepts.
2. Empty class size; how does the compiler lay out memory for it if I create 20 objects?
3. Polymorphism, compile time and runtime? For compile time, how does the compiler call the corresponding function? Any idea how the compiler achieves compile-time polymorphism?
4. Vtables and vptr, with example classes to write.
5. If I have 10 base class pointers assigned from 10 derived classes, with a virtual function in the base, how many vtables get created?
6. Is the virtual keyword needed on the derived class method too? If I add it, what will happen?
7. What is pure virtual?
8. Exception handling.
9. What is the difference between exception handling and error handling?
10. What design patterns have you used?
11. Singleton pattern code for a multithreaded environment?
12. What are future and promise?
13. Is multithreading better or multiprocessing, and why? How do you choose one over the other?
14. Simple uses of socket programming?
---------------
1. Understanding overall C++ skills with basic questions, similar to the Oracle telephonic round.
2. Written test with code snippets to find and correct issues; the given code had the issues below:
  - The diamond problem, with the virtual keyword intentionally left out so the diamond is unresolved.
  - lock_guard to be used instead of a raw mutex.
  - Order of locking and unlocking.
  - Private data accessed after a reinterpret_cast.
  - Array out-of-bounds issue.
  - Memory allocated with new for an array of objects, but the delete missing the [] brackets.
All the problems below need a use case diagram, class diagram and code, and must be scalable, flexible and fit memory-constrained devices:
3. Design a pattern and class interface for Tweet, e.g. deleteTextTweet, deleteMultimediaTweet, readTextTweet, readMultimediaTweet, createTextTweet, createMultimediaTweet.
4. A UI page with id, name and address fields to be entered; on save the page should show that data, and on cancel the fields go blank. His idea: design it and write Qt code using the MVC pattern.
5. What are the SOLID principles? Explain the Dependency Inversion Principle with code.
6. What are the compilation stages for Qt applications? The expectation is to mention moc for signals and slots.
7. How are signals and slots implemented internally?
8. What are the stages to use Qt in a compiler?
9. How to package a Qt APK?
10. On-screen images play at 50 frames per second; bubbles are on screen, and when you click a bubble it gets highlighted with a + symbol on it. Design the architecture.
11. Frames come continuously from hardware; your process should take the frames, do some processing on them and hand them off for display, with no frame loss. Stack memory is 128 MB, each frame is 4 MB. Design the system.
12. What are the Agile process, the review process and CI/CD?
13. Innovation.
14. Design process for project execution.
15. How do Qt and Android work together?
16. Why Qt on Android?
17. Which IPCs are there and where to use each?
18. Design principles and patterns?
19. What are the architectural processes and tools?

1. How to transfer a sentence from one end to the other over a noisy wire:
 "Bhaskar is a very good engineer" ... The interviewer's expectation: repeat the sentence using only the keywords to reduce bandwidth, i.e. repeat "Bhaskar good engineer".

2. What is a lambda function, how is it useful compared to a normal function, and what are its usage scenarios?
3. Difference between RISC and CISC architecture?
4. uint val = 10; uint *ptr; ptr = &val; What is the addressing mode of execution? Ans: indirect addressing mode.
5. What is the sampling theorem? Have you implemented any in a codec?
6. Any filter implementation and its use?
7. Who invokes the scheduler? The interviewer was looking for the exact term from the kernel.
8. What is a zombie process?
9. For multiplication, or to find whether a number is even: how will you write it? The interviewer's expectation was to write it using bitwise operators, not arithmetic operators.
10. Coding practice questions, e.g. is 1 == a or a == 1 better for comparison?
11. Is a balanced tree or an unbalanced tree better? Why?
12. How is memcpy implemented? If the addresses overlap, how will you handle it?
14. How do services interact in Android? Why does Android use Binder even though other Linux IPC mechanisms are present?
15. How does a service work in Android?
16. What are smart pointers?
17. How is a weak pointer useful?
18. What is static_assert?
19. What is the diamond problem and how is it solved in C++?
20. Memory allocation methods and default values: malloc, calloc, new, realloc?
21. With microcontroller IO ports, how to generate a ramp wave?
22. What are string and string builder in C++?

design patterns
c++
c++11
role: exactly how many people

-------------------
thread pool
object cache implementation
LRU cache
tree traversal, in-order
Given an array of numbers and a number, find if the array has two elements whose sum equals the given number
string anagram
singleton
------------------------
- vector vs array
- vector vs list
- linked list: define the structure
- linked list intersected with another linked list
- print BST left view

- auto, lambda syntax
- projects
- design patterns
- how to analyze a core dump
- most difficult problem solved

- linux OS internals, IPC
- linux crash debugging
- n/w protocols

---- Given an array of x, y coordinates, find the nearest and farthest coordinates from the origin (0,0)? l = sqrt(x^2 + y^2); sort all of them.
- thread pooling, e.g. when the number of requests is large and the queue is of smaller size...
- linked list reverse
- print even, odd
- challenging problem
- c++, system experience

-----
- What is little endian/big endian?
- Generic project details
- When to use shared memory
- Why is a pipe not used when media received in a server needs to be given to another process for processing?
- Multi-process vs multi-thread difference.
---
- Program to align 0's to the left and 1's to the right in a bool array.
- Unique pointer usage, and write a program to demonstrate it.
- Change the unique pointer code to another smart pointer; how will it behave?
- What is multithreading; how to control it and protect data?
- What are the IPC mechanisms and how does each work?
- What is a lambda function and how to use one?
- STL details.

---
- What are the socket APIs?
- Difference b/w UDP and TCP
- VoIP and SIP are protocols of which layer?
- What details are in a socket file descriptor?
- What is select and how is it useful?
- Generic multithreading details
- How does cloud communication happen, and what protocols are used to communicate?
----

- reverse a string
- C++11 concepts like smart pointers, lambda, auto, STL
- substring finding
- Convert Java code having an interface class and derived classes A and B to C code; objects of A and B will be there. The main intention here is writing a struct with function pointers as struct members.
- nibble swap in a byte
- effect of a C language reference variable from caller to callee

--
- Write a class having a copy constructor, assignment operator, destructor, and a pointer data member.
- Stack implementation using templates, but the stack should not have a fixed capacity; it should grow with use.
- C++11 concepts.
- IPC concepts in detail.



Thursday, April 30, 2020

Generic

Cache :
Caching is the process of temporarily storing frequently accessed data in a cache so we can reuse it on the next request.

Monday, January 27, 2020

OOPS

OOPS concepts----------------------------------------------------------------------------
OOP programming revolves around the keywords below.
Object-oriented programming, used to simulate real-life things in your code, is called OOP.

OOP's 2 entities:
Class (blueprint of an object): a specification of an object.
       It has a name, properties and methods.
Ex: College student details: attributes/data members, and functions/methods.
Object (instance of a class): a piece of code which represents a real-life entity.
      An object has its own identity, properties and behavior.
Ex: the black [property] dog [object] barks [behavior]
Ex: TennisCourt: here the court is the object. The court has attributes like color, surface, dimensions etc., called variables/data members, and functionality like court booking, court cleaning etc., called functions/methods.
A change in attributes will change the behavior of the object.

OOPs 4 Principles:
  • Encapsulation: Encapsulation is the mechanism of binding data and methods together and hiding them from the outside world. Encapsulation is achieved by keeping state private (access specifiers) so that other objects don’t have direct access to it. Instead, they can access this state only through a set of public methods. Ex: a capsule tablet, which encapsulates medicine and chemicals.
  • Abstraction: Abstraction helps by hiding the internal implementation details of objects and exposing only the operations relevant to them. Ex: a student can have many attributes, but we take only those relevant to the college, i.e. name, age and roll number; the college has no need to know his girlfriends' names, his childhood school friends etc., only what is relevant to that business.
  • Inheritance: Inheritance is the mechanism of creating new classes from existing ones, which helps reusability. It works on relationships: one object acquires the properties of another object.
  • Polymorphism: Polymorphism (from Greek, meaning “many forms”) is the ability of an object to take different forms and thus, depending upon the context, respond to the same message in different ways. Take the example of a chess game; a chess piece can take many forms, like bishop, castle or knight, and all these pieces respond differently to the ‘move’ message.
Design-----------------------------------
Design should always follow 3 things:
DRY - Do not repeat yourself
Divide and Conquer - as code keeps accumulating in one file, it becomes difficult to change
Expect Change - new features will always come, and they require change.

SOLID:
S - Single Responsibility Principle: a class should have only one reason to change; it can change multiple times, but only within its own context (here, cricket).
Ex: Sachin's performance should vary only for sporting reasons, not because the BCCI president changed or because he needs to act in an ad; those should not be reasons for the class to change.

O - Open for extension, closed for modification: create an interface and add concrete classes for each type of the base interface. Ex: IPayment has a makePayment() function, and all subclasses like CardPayment and CashPayment have a makePayment method. If online payment arrives tomorrow, we add an OnlinePayment class with a makePayment method (where you log in with name and password and then pay) without changing the existing concrete classes.

L - Liskov Substitution Principle: a child class should be able to substitute the base class's functionality.
Example: Don and his 3 sons; one "son" is not his actual son but his neighbor's son, who is a cook. He cannot replace the father when the father dies: instead of killing enemies he will serve tea, coffee, etc.

I - Interface Segregation Principle: do not force any client class to implement an interface that is irrelevant to it.

D - Dependency Inversion Principle: the responsibility for creating a dependency object should be moved out of the consuming class; the dependency is supplied to it.

Practices-----------------------------------------------------------------------
OO Analysis and Design means: identifying the objects for the problem, the relations between the objects, and providing the interface for each object.
1. Collect the requirements
Write all the requirements with pen and paper/whiteboard, or use tools/systems.

   - Functional requirements: how the application should look and work, and all its boundaries.
   Non-functional requirements:
     - performance requirements
     - security requirements: data access etc.
     - documentation, support
 Map the requirements to technical descriptions.
    Use cases:
          - title: short description of the use case,
          - actor: the user and how they interact with the use case (all who are involved, like system admin class, person class, database etc.)
          - scenario: how the scenario works.

2. Describe the system in brief, or draw the wireframes.
3. Identify the classes
4. Create the UML diagrams [sequence, use case, etc.]
UML (Unified Modeling Language):
    Graphical notation with which the business/software system can easily be conveyed.

Many diagram types: class diagram, object diagram, use case diagram, activity diagram, sequence diagram and state/communication/interaction diagrams.
Association, multiplicity, aggregation, composition, generalization, dependency and abstract class.
Use case diagram: an oval at the center of the diagram with the use case name inside, plus include and extend relations.
Class diagram: a rectangle with 3 horizontal sections; the upper section shows the class name, the middle section shows the properties/variables/data of the class, and the lower section contains method/function names, called class operations.
Sequence diagram: across the top of your diagram, identify the class instances (objects) by putting each class inside a box.

Tuesday, January 21, 2020

Design Patterns

Patterns are for flexibility, maintainability and extensibility, i.e. adding a new feature, replacing a feature or removing a feature should not be complex.


Creational: object instantiation
Structural: class relationships and hierarchy
Behavioral: object intercommunication

Creational:
Factory Method:
Provides an interface to create an object but defers the creation to the subclass.
- Creates an object based on a runtime parameter.
- You do not need to know in advance which objects you will need to create.

Friday, November 15, 2019

Networking

Basic networking knowledge.
DNS:
DHCP:
TCP/IP stack :
TCP/UDP header:
Unicast :
Multicast :
Broadcast :
Socket program:  https://www.softprayog.in/programming/interprocess-communication-using-unix-domain-sockets

IP and its classes:
Example programs:
1. file transfer using socket
2. broadcast a message in LAN using socket
3.  select, epoll with socket

https://www.tenouk.com/Module39.html 

Android

Intent:
Activity:
Service:
Binder:
JNI:
Android Make:

Linux process and IPC :

Process:
  fork()
  exec()
  PCB?
 
IPC:
1. Pipes

              A pipe is always unidirectional, used for parent and child communication.
Data written to the write end of a pipe can be read from its read end.

int fd[2];
A pipe is created with two file descriptors using the pipe system call, i.e. pipe(fd); it returns -1 on failure.
fd[0] - read, fd[1] - write. These behave like regular file descriptors; the ordinary read()/write() APIs are used for reading and writing.
 - Data written to the write end of the pipe is buffered by the kernel until it is read from the read end of the pipe.
- If a process attempts to read from an empty pipe, read blocks until data is available.
- If a process attempts to write to a full pipe, write blocks until sufficient data has been read from the pipe to allow the write to complete.
- Non-blocking I/O is possible by setting the O_NONBLOCK flag using the fcntl system call.
- The communication channel provided by pipes is a byte stream; there is no concept of message boundaries.
- If the write end fd[1] is closed, a reader sees EOF and read returns 0.
- If the read end fd[0] is closed, a write causes the SIGPIPE signal to be delivered to the calling process; if the process ignores the signal, write fails with the EPIPE error.
 - The close API should be used to close the file descriptors: close(fd[0]) / close(fd[1]).
- The capacity of the pipe is traditionally the system page size (modern Linux uses a larger default, 64 KB).
Disadvantage: it works only between related processes, i.e. the file descriptors must be inherited within a process family (via fork).
---------
how the kernel implements pipes (at a high level).
  • Linux has a VFS (virtual file system) module called pipefs, that gets mounted in kernel space during boot
  • pipefs is mounted alongside the root file system (/), not in it (pipe’s root is pipe:)
  • pipefs cannot be directly examined by the user unlike most file systems
  • The entry point to pipefs is the pipe(2) syscall
  • The pipe(2) syscall is used by shells and other programs to implement piping, and just creates a new file in pipefs, returning two file descriptors (one for the read end, opened using O_RDONLY, and one for the write end, opened using O_WRONLY)
  • pipefs is stored using an in-memory file system

---------

2. Fifo - named pipes:
     The disadvantage of a pipe is that its file descriptors are only usable within related processes. Named pipes overcome that: they let unrelated processes communicate by giving the pipe a name.

    Command-line tools to create one: mkfifo, mknod.

Ex1: mkfifo file_name creates a file of type pipe.
Ex2: mknod filename p creates a file of type pipe; here p denotes the pipe file type, as mknod is also used to create other types, like character device files.
Main features: a FIFO is a name within a file system; it can be opened just like a normal file, with the same read/write for reading and writing.
- Opening a FIFO for reading blocks until a writer opens it for writing, and vice versa.
 - It can be made non-blocking using the O_NONBLOCK flag in the open system call.
Disadvantage: pipes follow strict FIFO behaviour.

3. System V message queues(MQ):
      Message queues behave similarly to pipes, but the FIFO order can be changed.
      MQs have system-wide limits, which can be seen with the ipcs -l command:
       ------ Messages Limits --------
max queues system wide = 32000
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384

      A MQ is a sequence of messages, each of which has two parts:
      1. the payload, which is an array of bytes;
      2. a type, a positive number, which allows flexible retrieval (read by type as needed, not just in FIFO order).
    To create a message queue we need an IPC key; the key can be created using the ftok() API.
    #include <sys/types.h>
#include <sys/ipc.h>

key_t ftok (const char *pathname, int proj_id);


      pathname must be an existing, accessible file; the content of the file is immaterial.
      proj_id: only the lowest 8 bits of the project id are used, and it must be nonzero.

     msgget: this system call gets the message queue identifier for the given key.
     #include <sys/msg.h>
        int msgget (key_t key, int msg_flags);

         key: received from ftok. Sometimes the special key IPC_PRIVATE is used for a private MQ.
         msg_flags: IPC_CREAT to create a new queue; OR it with IPC_EXCL to fail if the queue already exists, plus permissions as octal values.
     msgget returns an integer identifier, which is used by the subsequent send, receive and control message APIs.

    msgctl: performs control operations on the message queue.
      int msgctl (int msqid, int cmd, struct msqid_ds *buf);
      msqid: qid returned from msgget.
      cmd: the command, which can be IPC_RMID, IPC_STAT or IPC_SET. Use IPC_RMID to remove the message queue once its use is finished.

    msgsnd: sends a message to the message queue.
int msgsnd (int msqid, const void *msgp, size_t msgsz, int msgflg);
       msgp: message buffer
       msgsz: message size (the size of the payload, not including the long type field)
       msgflg: 0 or IPC_NOWAIT.
msgrcv: receives a message from the message queue.
ssize_t msgrcv (int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);

       msgp: message buffer
       msgsz: message size
       msgtyp: type of message to be retrieved.
       msgflg: 0 or IPC_NOWAIT.
Example code :
//----keep below data in queue.h----------
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdlib.h>
#include <string.h>


#define ProjectId 123
#define PathName  "/tmp/myqueue" /* any existing, accessible file would do */
#define MsgLen    4
#define MsgCount  6


typedef struct {
  long type;                 /* must be of type long */
  char payload[MsgLen + 1];  /* bytes in the message */
} queuedMessage;
void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /* EXIT_FAILURE */
}

//------------------------------------------
// server.cpp:
#include "queue.h"

int main() {
  key_t key = ftok(PathName, ProjectId);
  if (key < 0) report_and_exit("couldn't get key...");
  int qid = msgget(key, 0666 | IPC_CREAT);
  if (qid < 0) report_and_exit("couldn't get queue id...");
  char* payloads[] = {"msg1", "msg2", "msg3", "msg4", "msg5", "msg6"};
  int types[] = {1, 1, 2, 2, 3, 3}; /* each must be > 0 */
  int i;
  for (i = 0; i < MsgCount; i++) {
    /* build the message */
    queuedMessage msg;
    msg.type = types[i];
    strcpy(msg.payload, payloads[i]);
    /* send the message */
    msgsnd(qid, &msg, sizeof(msg.payload), IPC_NOWAIT); /* don't block; size excludes the long type */
    printf("%s sent as type %i\n", msg.payload, (int) msg.type);
  }
  return 0;
}

//client.cpp:
#include "queue.h"
int main() {
  key_t key= ftok(PathName, ProjectId); /* key to identify the queue */
  if (key < 0) report_and_exit("key not gotten...");
  int qid = msgget(key, 0666 | IPC_CREAT); /* access if created already */
  if (qid < 0) report_and_exit("no access to queue...");
  int types[] = {3, 1, 2, 1, 3, 2}; /* different than in sender */
  int i;
  for (i = 0; i < MsgCount; i++) {
    queuedMessage msg; /* defined in queue.h */
    if (msgrcv(qid, &msg, sizeof(msg.payload), types[i], MSG_NOERROR | IPC_NOWAIT) < 0)
      puts("msgrcv trouble...");
    printf("%s received as type %i\n", msg.payload, (int) msg.type);
  }
  /** remove the queue **/
  if (msgctl(qid, IPC_RMID, NULL) < 0)  /* NULL = 'no flags' */
    report_and_exit("trouble removing queue...");
  return 0;
}


Problem with the pipes, fifos and message queues, the work involved in sending data from one process to another is like this. Process P1 makes a system call to send data to Process P2. The message is copied from the address space of the first process to the kernel space during the system call for sending the message. Then, the second process makes a system call to receive the message. The message is copied from the kernel space to the address space of the second process. 

The shared memory mechanism does away with this copying overhead.


4. shared memory
The fastest IPC mechanism is shared memory.
 A shared memory segment is created by the kernel and mapped into the address space of the requesting process.

To use System V IPC, as with the message queues above, we need a System V IPC key, which we can get with the ftok API.
#include <sys/types.h>
#include <sys/ipc.h>
key_t ftok (const char *pathname, int proj_id);


shmget: gets the shared memory segment associated with the key.

#include <sys/shm.h>
int shmid = shmget(key_t key, size_t size, int shmflg);
key - obtained with ftok, or it can be IPC_PRIVATE.
size - size of the shared memory segment to be created; it is rounded up to a multiple of PAGE_SIZE.
shmflg - IPC_CREAT | 0660 permissions.

shmat: with this, the calling process attaches the shared memory segment identified by the shmid obtained from shmget.
void *shmat (int shmid, const void *shmaddr, int shmflg);
 shmid - return value from shmget.
shmaddr - can be NULL, or the process can specify the address at which the shared memory segment should be attached.
shmflg - SHM_RDONLY; there are more flags.
 On error returns (void *) -1; on success returns the shared memory address.

shmdt: detaches the shared memory segment from the calling process's address space.
int shmdt (const void *shmaddr);
shmaddr - the shared memory address returned by shmat.
 Returns -1 on error, 0 on success.

shmctl: controls the shared memory segment.
int shmctl (int shmid, int cmd, struct shmid_ds *buf);
shmid - from shmget.
cmd - IPC_STAT, IPC_RMID etc.
shmid_ds is a structure describing the shared memory segment's parameters.

Returns -1 on error.


5. Shared File
      
6. semaphore
7. pthreads
8. mutex
9. conditional variables

10. unix socket for local IPC 
     https://www.softprayog.in/programming/interprocess-communication-using-unix-domain-sockets

system calls:

Kernel and its internals:

Driver programming:

