
ABOUT US

Our development agency is committed to providing you with the best service.

OUR TEAM

The awesome people behind our brand ... and their life motto.

  • Kumar Atul Jaiswal

    Ethical Hacker

    Hacking is the speed of innovation and technology, with romance.

  • Kumar Atul Jaiswal

    CEO Of Hacking Truth

    Loopholes exist in every major security system; you just need to understand them well.

  • Kumar Atul Jaiswal

    Web Developer

    Technology is the best way to change everything, including mindset and goals.

OUR SKILLS

We pride ourselves on strong, flexible and top-notch skills.

Marketing

Development 90%
Design 80%
Marketing 70%

Websites

Development 90%
Design 80%
Marketing 70%

PR

Development 90%
Design 80%
Marketing 70%

ACHIEVEMENTS

We help our clients integrate, analyze, and use their data to improve their business.

150

GREAT PROJECTS

300

HAPPY CLIENTS

650

COFFEES DRUNK

1568

FACEBOOK LIKES

STRATEGY & CREATIVITY


PORTFOLIO

We pride ourselves on bringing a fresh perspective and effective marketing to each project.

  • dig in Networking tools





    We talked about domains in the previous task -- now let's talk about how they work.


    Ever wondered how a URL gets converted into an IP address that your computer can understand? The answer is a TCP/IP protocol called DNS (Domain Name System).


    At the most basic level, DNS allows us to ask a special server to give us the IP address of the website we're trying to access. For example, if we made a request to www.google.com, our computer would first send a request to a special DNS server (which your computer already knows how to find). The server would then go looking for the IP address for Google and send it back to us. Our computer could then send the request to the IP of the Google server.



    Let's break this down a bit.


    You make a request to a website. The first thing that your computer does is check its local cache to see if it's already got an IP address stored for the website; if it does, great. If not, it goes to the next stage of the process.


    Assuming the address hasn't already been found, your computer will then send a request to what's known as a recursive DNS server. These will automatically be known to the router on your network. Many Internet Service Providers (ISPs) maintain their own recursive servers, but companies such as Google and OpenDNS also control recursive servers. This is how your computer automatically knows where to send the request for information: details for a recursive DNS server are stored in your router. This server will also maintain a cache of results for popular domains; however, if the website you've requested isn't stored in the cache, the recursive server will pass the request on to a root name server.


    There are precisely 13 root name server addresses in the world (each operated as many distributed instances). The root name servers essentially keep track of the DNS servers in the next level down, choosing an appropriate one to redirect your request to. These lower level servers are called Top-Level Domain servers.


    Top-Level Domain (TLD) servers are split up into extensions. So, for example, if you were searching for tryhackme.com your request would be redirected to a TLD server that handled .com domains. If you were searching for bbc.co.uk your request would be redirected to a TLD server that handles .co.uk domains. As with root name servers, TLD servers keep track of the next level down: Authoritative name servers. When a TLD server receives your request for information, the server passes it down to an appropriate Authoritative name server.




    Authoritative name servers are used to store DNS records for domains directly. In other words, every domain in the world will have its DNS records stored on an Authoritative name server somewhere or other; they are the source of the information. When your request reaches the authoritative name server for the domain you're querying, it will send the relevant information back to you, allowing your computer to connect to the IP address behind the domain you requested.


    When you visit a website in your web browser this all happens automatically, but we can also do it manually with a tool called dig. Like ping and traceroute, dig should be installed automatically on Linux systems.


    Dig allows us to manually query recursive DNS servers of our choice for information about domains:
    dig <domain> @<dns-server-ip>

    It is a very useful tool for network troubleshooting.
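    As a quick illustration (the resolver here is Google's public DNS mentioned later in this room; the output is trimmed and the values are placeholders, so your results will differ):

        dig tryhackme.com @8.8.8.8

        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

        ;; ANSWER SECTION:
        tryhackme.com.        157    IN    A    <ip-address>

    Omitting the @<dns-server-ip> part simply makes dig use the default resolver configured on your system.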












    This is a lot of information. We're currently most interested in the ANSWER section for this room; however, taking the time to learn what the rest of this means is a very good idea. In summary, that information is telling us that we sent it one query and successfully (i.e. No Errors) received one full answer -- which, as expected, contains the IP address for the domain name that we queried.


    Another interesting piece of information that dig gives us is the TTL (Time To Live) of the queried DNS record. As mentioned previously, when your computer queries a domain name, it stores the results in its local cache. The TTL of the record tells your computer when to stop considering the record as being valid -- i.e. when it should request the data again, rather than relying on the cached copy.


    The TTL can be found in the second column of the answer section:
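    As an illustration (reconstructed layout only - the IP below is a placeholder and the exact values in the original screenshot will differ), an answer section line follows this format:

        ;; ANSWER SECTION:
        tryhackme.com.        157    IN    A    <ip-address>

    The second column (157 here) is the TTL in seconds.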







    It's important to remember that TTL (in the context of DNS caching) is measured in seconds, so the record in the example will expire in two minutes and thirty-seven seconds.



    Have a go at some questions about DNS and dig.


    #1 What is DNS short for?

    Ans :- Domain Name System



    #2 What is the first type of DNS server your computer would query when you search for a domain?

    Ans :- Recursive



    #3 What type of DNS server contains records specific to domain extensions (i.e. .com, .co.uk, etc)? Use the long version of the name.

    Ans :- Top-level Domain


    #4 Where is the very first place your computer would look to find the IP address of a domain?

    Ans :- Local Cache


    #5 [Research] Google runs two public DNS servers. One of them can be queried with the IP 8.8.8.8, what is the IP address of the other one?

    Ans :- 8.8.4.4


    #6 If a DNS query has a TTL of 24 hours, what number would the dig query show?

    Ans :- 86400 (24 hours × 60 minutes × 60 seconds = 86,400 seconds)



    I hope you liked this post, then you should not forget to share this post at all.
    Thank you so much :-)


  • TryHackMe Google Dorking Walkthrough







    Google is a very powerful search engine. Use this room to learn how to harness the power of Google.


    [Task 1] Ye Ol' Search Engine



    Google is arguably the most famous example of a "Search Engine" - I mean, who remembers Ask Jeeves? *shudders*


    Now it might be rather patronising explaining how these "Search Engines" work, but there's a lot more going on behind the scenes than what we see. More importantly, we can leverage this to our advantage to find all sorts of things that a wordlist wouldn't. Researching as a whole - especially in the context of cybersecurity - encapsulates almost everything you do as a pentester. MuirlandOracle has created a fantastic room on learning the attitudes towards how to research, and what information you can gain from it exactly.


    "Search Engines" such as Google are huge indexers – specifically, indexers of content spread across the World Wide Web.


    These essentials in surfing the internet use “Crawlers” or “Spiders” to search for this content across the World Wide Web, which I will discuss in the next task.



    [Task 2] Let's Learn About Crawlers


    What are Crawlers and how do They Work?


    These crawlers discover content through various means. One is pure discovery, where a URL is visited by the crawler and information regarding the content type of the website is returned to the search engine. In fact, there is a lot of information that modern crawlers scrape - but we will discuss how this is used later. Another method crawlers use to discover content is by following any and all URLs found on previously crawled websites - much like a virus, in the sense that it will try to traverse/spread to everything it can.


    Let's Visualise Some Things...


    The diagram below is a high-level abstraction of how these web crawlers work. Once a web crawler discovers a domain such as mywebsite.com, it will index the entire contents of the domain, looking for keywords and other miscellaneous information - but I will discuss this miscellaneous information later.


















    In the diagram above, "mywebsite.com" has been scraped as having the keywords "Apple", "Banana" and "Pear". These keywords are stored in a dictionary by the crawler, which then returns them to the search engine, i.e. Google. Because of this persistence, Google now knows that the domain "mywebsite.com" has the keywords "Apple", "Banana" and "Pear". As only one website has been crawled, if a user were to search for "Apple", "mywebsite.com" would appear. The same would happen if the user searched for "Banana": as the indexed contents from the crawler report the domain as having "Banana", it will be displayed to the user.











    As illustrated below, a user submits a query to the search engine of “Pears".
    Because the search engine only has the contents of one website that has been crawled with the keyword of “Pears” it will be the only domain that is presented to the user.




    However, as we previously mentioned, crawlers attempt to traverse (termed "crawling") every URL and file that they can find! If "mywebsite.com" had the same keywords as before ("Apple", "Banana" and "Pear"), but also contained a URL to another website, "anotherwebsite.com", the crawler would then attempt to traverse everything at that URL and retrieve the contents of everything within that domain as well.



    This is illustrated in the diagram below. The crawler initially finds “mywebsite.com”, where it crawls the contents of the website - finding the same keywords (“Apple", “Banana” and “Pear”) as before, but it has additionally found an external URL. Once the crawler is complete on “mywebsite.com”, it'll proceed to crawl the contents of the website “anotherwebsite.com”, where the keywords ("Tomatoes", “Strawberries” and “Pineapples”) are found on it. The crawler's dictionary now contains the contents of both “mywebsite.com” and “anotherwebsite.com”, which is then stored and saved within the search engine.














    Recapping


    So to recap, the search engine now has knowledge of two domains that have been crawled:


    1. mywebsite.com
    2. anotherwebsite.com



    Although note that “anotherwebsite.com” was only crawled because it was referenced by the first domain “mywebsite.com”. Because of this reference, the search engine knows the following about the two domains:



    • Domain Name           Keyword
    • mywebsite.com         Apples
    • mywebsite.com         Bananas
    • mywebsite.com         Pears
    • anotherwebsite.com    Tomatoes
    • anotherwebsite.com    Strawberries
    • anotherwebsite.com    Pineapples



    Or as illustrated below:










    Now that the search engine has some knowledge about keywords, say if a user was to search for “Pears” the domain “mywebsite.com” will be displayed - as it is the only crawled domain containing "Pears":











    Likewise, say in this case the user now searches for "Strawberries". The domain "anotherwebsite.com" will be displayed, as it is the only domain that has been crawled by the search engine that contains the keyword "Strawberries":









    This is great...But imagine if a website had multiple external URLs (as they often do!). That would require a lot of crawling to take place. There's always the chance that another website might have similar information to one that has already been crawled - right? So how does the "Search Engine" decide on the hierarchy of the domains that are displayed to the user?






    In the diagram below in this instance, if the user was to search for a keyword such as "Tomatoes" (which websites 1-3 contain) who decides what website gets displayed in what order?








    A logical presumption would be that websites 1 -> 3 would be displayed in that order...But that's not how real-world domains work or are named.


    So, who (or what) decides the hierarchy? Well...



    #1 Name the key term of what a "Crawler" is used to do








    Ans :- Index

     


    #2 What is the name of the technique that "Search Engines" use to retrieve this information about websites?



    Ans :- Crawling



    #3 What is an example of the type of contents that could be gathered from a website?


    Ans :- Keywords




    [Task 3] Enter: Search Engine Optimisation




    Search Engine Optimisation


    Search Engine Optimisation, or SEO, is a prevalent and lucrative topic in modern-day search engines. In fact, so much so that entire businesses capitalise on improving a domain's SEO "ranking". At an abstract level, search engines will "prioritise" those domains that are easier to index. There are many factors in how "optimal" a domain is - resulting in something similar to a point-scoring system.


    To highlight a few of the factors that influence how these points are scored:


    • How responsive your website is to different browser types, i.e. Google Chrome, Firefox and Internet Explorer - this includes mobile phones!


    • How easy it is to crawl your website (or whether crawling is even allowed ...but we'll come to this later) through the use of "Sitemaps"


    • What kind of keywords your website has (i.e. in our examples, if the user were to search for a query like "Colours", no domain would be returned, as the search engine has not (yet) crawled a domain that has any keywords to do with "Colours")


    There is a lot of complexity in how the various search engines individually "point-score" or rank these domains, including vast algorithms. Naturally, the companies running these search engines, such as Google, don't share exactly how the hierarchic view of domains ultimately ends up. Although, as these are businesses at the end of the day, you can pay to advertise/boost the order in which your domain is displayed.



    - Find a good example of how websites pay to boost their domains in the search listings -



    There are various online tools - sometimes provided by the search engine providers themselves that will show you just how optimised your domain is. For example, let's use SEO Site Checkup to check the rating of TryHackMe:


    According to this tool, TryHackMe is rated 62/100 (as of 31/03/2020). That's not too bad, and the justifications for how this score was calculated are shown further down the page.


    But...Who or What Regulates these "Crawlers"?



    Aside from the search engines that provide these "Crawlers", website/web-server owners themselves ultimately stipulate what content "Crawlers" can scrape. Search engines will want to retrieve everything from a website - but there are a few cases where we wouldn't want all of the contents of our website to be indexed! Can you think of any...? How about a secret administrator login page? We don't want everyone to be able to find that directory - especially through a Google search.




    Introducing Robots.txt...



     
    #1 Using the SEO Site Checkup tool on "tryhackme.com", does TryHackMe pass the “Meta Title Test”? (Yea / Nay)


    Ans :- Yea
     


    #2 Does "tryhackme.com" pass the “Keywords Usage Test?” (Yea / Nay)



    Ans :- Nay

     


    #3 Use https://neilpatel.com/seo-analyzer/ to analyse http://googledorking.cmnatic.co.uk:

    What "Page Score" does the Domain receive out of 100?





    Ans :-  85/100


    #4 With the same tool and domain in Question #3 (previous):

    How many pages use “flash”



    Ans :-  0


     

    #5 From a "rating score" perspective alone, what website would list first?

    tryhackme.com or googledorking.cmnatic.co.uk

    Use tryhackme.com's score of 62/100 as of 31/03/2020 for this question.



    Ans :- googledorking.cmnatic.co.uk



    [Task 4] Beepboop - Robots.txt



    Robots.txt



    Similar to "Sitemaps" which we will later discuss, this file is the first thing indexed by "Crawlers" when visiting a website.


    But what is it?



    This file must be served from the root directory specified by the webserver itself. Looking at this file's .txt extension, it's fairly safe to assume that it is a text file.



    The text file defines the permissions the "Crawler" has on the website. For example, what type of "Crawler" is allowed (i.e. you may only want Google's "Crawler" to index your site and not MSN's). Moreover, robots.txt can specify what files and directories we do or don't want to be indexed by the "Crawler".

    A very basic markup of a Robots.txt is like the following:
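    The original example image isn't reproduced here, but based on the description that follows, a minimal robots.txt of this kind would look something like:

        User-agent: *
        Allow: /
        Sitemap: http://mywebsite.com/sitemap.xml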









    Here we have a few keywords...


    • Keyword    Function
    • User-agent    Specifies the type of "Crawler" that can index your site (the asterisk being a wildcard, allowing all "User-agents")
    • Allow    Specifies the directories or file(s) that the "Crawler" can index
    • Disallow    Specifies the directories or file(s) that the "Crawler" cannot index
    • Sitemap    Provides a reference to where the sitemap is located (improves SEO as previously discussed; we'll come to sitemaps in the next task)


     
    In this case:


    1. Any "Crawler" can index the site


    2. The "Crawler" is allowed to index the entire contents of the site


    3. The "Sitemap" is located at http://mywebsite.com/sitemap.xml



    Say we wanted to hide directories or files from a "Crawler"? Robots.txt works on a "blacklisting" basis. Essentially, unless told otherwise, the Crawler will index whatever it can find.
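    A sketch of the robots.txt being described here (reconstructed from the explanation below; the original image isn't shown):

        User-agent: *
        Disallow: /super-secret-directory/
        Disallow: /not-a-secret/but-this-is/
        Sitemap: http://mywebsite.com/sitemap.xml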








    In this case:

    1. Any "Crawler" can index the site



    2. The "Crawler" can index every other content that isn't contained within "/super-secret-directory/".


    Crawlers also know the difference between sub-directories, directories and files, as in the case of the second "Disallow:" entry ("/not-a-secret/but-this-is/").


    The "Crawler" will index all the contents within "/not-a-secret/", but will not index anything contained within the sub-directory "/but-this-is/".


    3. The "Sitemap" is located at http://mywebsite.com/sitemap.xml


    What if we Only Wanted Certain "Crawlers" to Index our Site?


    We can stipulate so, such as in the example below:
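    Reconstructed from the description that follows, such a robots.txt would look roughly like:

        User-agent: Googlebot
        Allow: /

        User-agent: msnbot
        Disallow: /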




    In this case:



    1. The "Crawler" "Googlebot" is allowed to index the entire site ("Allow: /")

    2. The "Crawler" "msnbot" is not allowed to index the site (Disallow: /")



    How about Preventing Files From Being Indexed?


    Whilst you could make a manual entry for every file extension that you don't want indexed, you would have to provide the directory it is within, as well as the full filename. Imagine if you had a huge site! What a pain... Here's where we can use a bit of regexing.
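    A sketch matching the description below (reconstructed; the original image isn't reproduced):

        User-agent: *
        Disallow: /*.ini$
        Sitemap: http://mywebsite.com/sitemap.xml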







    In this case:


    1. Any "Crawler" can index the site


    2. However, the "Crawler" cannot index any file that has the extension of .ini within any directory/sub-directory using ("$") of the site.


    3. The "Sitemap" is located at http://mywebsite.com/sitemap.xml


    Why would you want to hide a .ini file for example? Well, files like this contain sensitive configuration details. Can you think of any other file formats that might contain sensitive information?




    #1 Where would "robots.txt" be located on the domain "ablog.com"


    Ans :-  ablog.com/robots.txt



    #2 If a website was to have a sitemap, where would that be located?


    Ans :-  /sitemap.xml



    #3 How would we only allow "Bingbot" to index the website?


    Ans :- user-agent: Bingbot




    #4  How would we prevent a "Crawler" from indexing the directory "/dont-index-me/"?
     



    Ans :- Disallow: /dont-index-me/



    #5 What is the extension of a Unix/Linux system configuration file that we might want to hide from "Crawlers"?


    Ans :- .conf





    [Task 5] Sitemaps


    Sitemaps


    Comparable to geographical maps in real life, “Sitemaps” are just that - but for websites!

    “Sitemaps” are indicative resources that are helpful for crawlers, as they specify the necessary routes to find content on the domain. The below illustration is a good example of the structure of a website, and how it may look on a "Sitemap":









    The blue rectangles represent the route to nested content, similar to a directory, i.e. "Products" for a store, whereas the green rounded rectangles represent an actual page. However, this is for illustration purposes only - "Sitemaps" don't look like this in the real world. They look much more like this:
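    For reference, a minimal sitemap.xml looks roughly like this (an illustrative sketch with placeholder URLs, not the file from the original screenshot):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://mywebsite.com/</loc>
            <lastmod>2020-03-31</lastmod>
          </url>
          <url>
            <loc>http://mywebsite.com/products/page-1</loc>
          </url>
        </urlset>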









    “Sitemaps” are XML formatted. I won't explain the structure of this file format, as the room XXE created by falconfeast does a mighty fine job of this.

    The presence of "Sitemaps" holds a fair amount of weight in influencing the "optimisation" and favorability of a website. As we discussed in the "Search Engine Optimisation" task, these maps make the traversal of content much easier for the crawler!


    Why are "Sitemaps" so Favourable for Search Engines?


    Search engines are lazy! Well, better yet - search engines have a lot of data to process. The efficiency of how this data is collected is paramount. Resources like "Sitemaps" are extremely helpful for "Crawlers" as the necessary routes to content are already provided! All the crawler has to do is scrape this content - rather than going through the process of manually finding and scraping. Think of it as using a wordlist to find files instead of randomly guessing their names!



    The easier a website is to "Crawl", the more optimised it is for the "Search Engine"




    #1 What is the typical file structure of a "Sitemap"?


    Ans :- XML



    #2 What real life example can "Sitemaps" be compared to?



    Ans :- Map



    #3 Name the keyword for the path taken for content on a website


    Ans :- Route





    [Task 6] What is Google Dorking?






    Using Google for Advanced Searching

    As we have previously discussed, Google has a lot of websites crawled and indexed. Your average Joe uses Google to look up cat pictures (I'm more of a dog person myself...). Whilst Google will have many cat pictures indexed, ready to serve to Joe, this is a rather trivial use of the search engine compared to what it can be used for.
    For example, we can add operators, much like those found in programming languages, to either increase or decrease our search results - or even perform actions such as arithmetic!









    Say we wanted to narrow down our search query: we can use quotation marks. Google will interpret everything between the quotation marks as exact and only return results for that exact phrase... rather useful for filtering out the rubbish that we don't need, as we have done below:








    Refining our Queries


    We can use terms such as "site" (such as bbc.co.uk) and a query (such as "gchq news") to search the specified site for the keyword we have provided, filtering out content that may be harder to find otherwise. For example, using the "site" term and the query "gchq", we can modify the order in which Google returns the results.

    In the screenshot below, searching for “gchq news” returns approximately 1,060,000 results from Google. The website that we want is ranked behind GCHQ's actual website:










    But we don't want that... We wanted "bbc.co.uk" first, so let's refine our search using the "site" term. Notice how, in the screenshot below, Google returns far fewer results? Additionally, the page that we didn't want has disappeared, leaving only the site that we did actually want!







    Of course, in this case, GCHQ is quite a topic of discussion - so there'll be a load of results regardless.



    So What Makes "Google Dorking" so Appealing?


    First of all - and most importantly - it's legal! It's all indexed, publicly available information. However, what you do with it is where the question of legality comes into play...


    A few common terms we can search and combine include:




    • Term    Action
    • filetype:    Search for a file by its extension (e.g. PDF)
    • cache:    View Google's cached version of a specified URL
    • intitle:    The specified phrase MUST appear in the title of the page
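    As quick illustrative uses of the last two terms (placeholder targets - substitute your own):

        cache:tryhackme.com
        intitle:"index of"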




    For example, let's say we wanted to use Google to search for all PDFs on bbc.co.uk:



    site:bbc.co.uk filetype:pdf



     





    Great, now we've refined our search so that Google queries all publicly accessible PDFs on "bbc.co.uk" - you wouldn't have found files like this "Freedom of Information Request Act" file from a wordlist!


    Here we used the extension PDF, but can you think of any other file formats of a sensitive nature that may be publicly accessible? (Often unintentionally!!) Again, what you do with any results that you find is where the legality comes into play - this is why "Google Dorking" is so great/dangerous.


    Here is a simple example of directory traversal.


    I have blanked out a lot of the below to cover you, me, THM and the owners of the domains:



     





    #1 What would be the format used to query the site bbc.co.uk about flood defences


    Ans :- site: bbc.co.uk flood defences


     

    #2 What term would you use to search by file type?


    Ans :-  filetype


     
    #3 What term can we use to look for login pages?

     

    Ans :-  intitle: login



     

     

    Disclaimer


    This was written for educational and penetration-testing purposes only.
    The author will not be responsible for any damage..!
    The author of this tool is not responsible for any misuse of the information.
    You will not misuse the information to gain unauthorized access.
    This information shall only be used to expand knowledge and not for causing malicious or damaging attacks. Performing any hacks without written permission is illegal..!


    All videos and tutorials are for informational and educational purposes only. We believe that ethical hacking, information security and cyber security should be familiar subjects to anyone using digital information and computers. We believe that it is impossible to defend yourself from hackers without knowing how hacking is done. The tutorials and videos provided on www.hackingtruth.in are only for those who are interested in learning about Ethical Hacking, Security, Penetration Testing and malware analysis. Hacking Truth is against misuse of the information and we strongly advise against it. Please regard the word hacking as ethical hacking or penetration testing every time this word is used.


    All tutorials and videos have been made using our own routers, servers, websites and other resources; they do not contain any illegal activity. We do not promote, encourage, support or incite any illegal activity or hacking without written permission. We want to raise security awareness and inform our readers on how to prevent themselves from becoming victims of hackers. If you plan to use the information for illegal purposes, please leave this website now. We cannot be held responsible for any misuse of the given information.



    - Hacking Truth by Kumar Atul Jaiswal



    I hope you liked this post, then you should not forget to share this post at all.
    Thank you so much :-)




  • What are some secret tips and tricks to search on Google?



    What are some secret tips and tricks to search on Google?







    Hello guys, there are more than 3.6 million searches on Google every minute, but even today many people do not know the secrets of searching on Google. Today I am going to tell you some such secrets, which can save you a lot of wasted time. No matter what you search for, you will find thousands of sites that are of no use to you. So what can we do so that only the topics we actually want appear in Google's results? Let's start with some tricks that can make you a smarter searcher.


    If you have to search for something and you want results about only that specific thing, put + in your search followed by your main keyword.
    For example: how to create blog +blogger


    That is, you will only be shown how to create a blog on Blogger; Google will not give you any other information.



    how to manage time +student


    That is, Google will give you information about how to manage time specifically for students.





    In the same way, you can also use the minus sign. The minus sign marks things you do not want to know about: Google will not show results containing whatever is written after your minus (-) sign.


    For example: -


    benefits of wordpress blog -blogger


    That is, Google will now give you only information related to WordPress, with no mention of Blogger.


    The third trick: when you want to contact a training centre, you can search Google like this: ("Share Market Training Center" + email). This will give you the email contacts of all those share market training centres; apart from these, Google will not give you any other information, which can save you a lot of time. Do this when you are looking for a job or want to find a contact. And if you want exact information about one particular thing - say a book or a video - search for it in quotes ("Book Name") and you will see exactly that on the first page, and nothing else.



    Now listen to this trick: what if you have to search for something you only partly remember - a song, for example? Suppose you remember some of the lyrics but have forgotten a part; you would search for it this way.



    For example: -


    "No one can break through a sieve when there is no one"


    Remember, you put * in place of the words you have forgotten, and the right result will come up in front of you.



    The next trick is for when you want to see results from only one site, or read only that site's articles. How would you search? Suppose you have to read articles from Navbharat Times only; you would search Google like this.
    For example: -



    site:<name of that site>



    After this, you will only see results from Navbharat Times and nothing else. Now, if you want to find sites similar to a given site, you would search Google like this.


    For example: -


    related:flipkart.com


    You will come across other sites like Flipkart. If you liked this answer, please share it - your comments give us the inspiration to write better answers.
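    To recap, the query patterns covered above look like this (illustrative examples - substitute your own terms; the site addresses are only stand-ins):

        how to create blog +blogger                  (results must relate to the word after +)
        benefits of wordpress blog -blogger          (exclude results mentioning the word after -)
        "Share Market Training Center" + email       (exact phrase combined with another term)
        "no one can * when there is no one"          (* stands in for the forgotten words)
        site:example.com                             (results from that one site only)
        related:flipkart.com                         (sites similar to the one given)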


    Thank you.



  • What is Hacking and what is Ethical Hacking, is it Legal or Illegal.






    What is Hacking and what is Ethical Hacking, is it Legal or Illegal.



    What is Hacking? 



    I welcome you to my blog. Friends, I try to make my posts one hundred percent accurate, so that you get correct information that is easy to read. Anyway, let's get to our topic. Today's topic is related to hacking. So friends, today we will learn what hacking is, what types of hackers there are, and how hacking is done.


    Nowadays the use of computers and smartphones is increasing very fast. It is almost impossible for people to do their work without a smartphone and a computer, whether they run their own business or work in a company or bank.


    Computers are used everywhere, even for the smallest tasks.


    Many problems have to be faced along the way, and when we talk about computers, the matter of cyber crime inevitably comes up.


    Friends, you must have heard about cyber crime. If you have not, then let me tell you that cyber crime is a type of crime in which hackers steal the personal details or data of others using computers. Because of this, people suffer heavy losses; criminals blackmail them and extort lakhs of rupees. Due to cyber crime, many organisations lose crores of rupees every year to stolen data.


    In a computer world where crime shows no sign of stopping, how can people protect the files kept on their own computers, and how can companies and businesses keep their data safe from hackers? Friends, we will answer this question in this post, so do read this article to the end.






    What is Hacking?



    Hacking means finding a weakness in a computer system and taking advantage of that weakness to break into the system. The person who does this hacking is called a hacker. A hacker has all kinds of computer-related knowledge, which is why they can easily find and exploit a vulnerability in someone's computer system. Just hearing the word, we assume that it is a bad thing.


    Types Of Hacking


    Network Hacking

    In this type of hacking, the hacker gathers all the information about a network, for which many tools are available, such as telnet, nslookup, ping, tracert, netstat, etc. Its main purpose is to gain access to the network system and its operation.
    Website Hacking

    Website hacking means illegally gaining control over a website and its associated web server, i.e. its database or interface.

    Email Hacking


    In email hacking, the hacker creates a duplicate phishing page and lures the user to it; if the user enters their information there, the email ID gets hacked. It is used for illegal work in an unauthorised manner.

    Ethical Hacking


    This type of hacking covers many ethical tasks. Here, the hacker first takes the owner's permission to find weaknesses in the system, and then helps the owner fix those weaknesses.


    Password Hacking 

     
    In this type of hacking, passwords are cracked in an unauthorised manner; the system is hacked by stealing secret passwords kept on the computer.

    Computer System Hacking


    In this type of hacking, the hacker obtains the ID and password of a computer system and uses that computer illegally through a remote connection to it.

    Sitting in one place, he can delete files and steal data.





    What are the types of hackers?



    1. Black Hat Hacker

    Black hat hackers illegally gain the IDs and passwords of your website, computer system, Android smartphone, Facebook account, etc. without your permission.

    They then assert control over the information kept in them, whether by deleting it or by demanding a ransom from the owner. Black hat hackers are ruthless; they do not hesitate at all to harm others.



    2. White Hat Hacker

    White hat hackers do their hacking in an ethical way. Hackers in this category protect our systems, websites and smartphones from being hacked. Such hackers take permission from the owner of the system and help protect it from attackers. A white hat hacker checks the security of your website or system, tells you whether it is secure or not, finds weaknesses and provides security. They are also called ethical hackers.



    3. Gray Hat Hacker
    Gray hat hackers are actually somewhere in between. They may or may not probe a system without permission; they might hack someone's system just to improve their skills, but they do not cause any harm. Still, they can be called neither white hat hackers nor black hat hackers.








    - Hacking Truth by Kumar Atul Jaiswal



    I hope you liked this post, then you should not forget to share this post at all.
    Thank you so much :-)




  • Is 10 and 12th marks are important in the field of ethical hacking mainly in PCM?





    There is a word embedded in the minds of today's youth, a word that attracts them so much that they cannot stop themselves, and that word is hacking. Perhaps you are one of those people who feel a thrill just on hearing the word, and many of you will want to become hackers. In this modern era, the word 'hacking' is very exciting.

    So, through this article we will learn about ethical hacking. The current version of the Certified Ethical Hacker certification is v10 (CEHv10), and since many people are preparing for that exam, we hope this article too helps you increase your knowledge and share it with others.




    About Hacking

    Hacking is identifying weaknesses in computer systems or networks and exploiting those weaknesses to gain access. An example of hacking: using a password-cracking algorithm to gain access to a system.




    In mid 80s & 90s, The media termed hacking related to cyber crime as false. Some peacocks then started using the very beautiful word - before moral hacking and it has become ethical hacking. Just ridiculous.



    In the mid '80s and '90s, the media falsely related hacking to cyber crime. Some moron then started using a much prettier word, 'ethical', to precede 'hacking', and it became 'Ethical Hacking'. Simply ridiculous.



    Cyber security training courses have mushroomed over the years. Most of them are just fake, and innocent youths who consider cyber security an in-demand field of computer science are keen to become hackers.



    No one can become a hacker from a short course like CEH. Nor can one become a successful hacker (LOL) through two or three years of undergraduate or diploma courses. Becoming a successful security specialist requires many sweaty hours of study and many sleepless nights spent with many systems.


    Those who cannot cope with the CLI should simply stay away from the information security field. System scripting languages such as bash, csh, sh, Perl and Python are also required, so that you can write your own code to deal with the system and talk to the network. Merely using the tools available in Kali Linux, or using Metasploit and the like, does not mean that you are a good security expert or so-called hacker.


    Cyber security is a matter of one's own experience in dealing with vulnerabilities and threats. I have seen many students who successfully completed a hacking course like CEH and still struggle to avoid getting stuck on simple Linux gotchas.







    Is 10 and 12th marks are important in the field of ethical hacking mainly in PCM?




    No. 10th and 12th class marks are not important for a hacking career - and not only 10th and 12th marks; even graduation/post-graduation marks are not important for a hacking career.



    You can excel in a cyber security career even without a degree, provided you have the passion and determination to break into systems with your skilled mind (along with years of skill and patience - unlike in films, you do not become a hacker overnight or in a short time).


    If you have good knowledge of any one of the areas below:

    • Network Security
    • Web Application Security
    • Exploit Writing
    • Reverse Engineering
    • Wireless Security
    • IoT Security

    then there is no need even for a degree.

    For private companies: what matters is your sound knowledge of the concepts, irrespective of the certificates and marks you obtained in graduation/post-graduation. I know of some members (from hacking groups) who excel in their hacking careers without a degree.



    For government organisations: there is a systematic approach, so here certificates and marks (above 60%) do matter.


    If you are passionate and enthusiastic about security, try to learn any one of the concepts above deeply, and jobs will come after you.





    I hope you liked this post, then you should not forget to share this post at all.
    Thank you so much :-)




  • WHAT WE DO

    We've been developing corporate tailored services for clients for 30 years.

    CONTACT US

    For enquiries you can contact us in several different ways. Contact details are below.

    Hacking Truth.in

    • Street :Road Street 00
    • Person :Person
    • Phone :+045 123 755 755
    • Country :POLAND
    • Email :contact@heaven.com
