Merged
105 changes: 66 additions & 39 deletions README.md
A powerful open-source, cross-platform network analysis tool for discovering websites hosted on specific IP addresses and ASN ranges.

## Features

- ASN scanning (Autonomous System Number) with IPv4/IPv6 support
- IP block scanning (CIDR format)
- HTTPS/HTTP automatic fallback
- Firewall bypass techniques (IP shuffling, header randomization, jitter)
- Proxy support (HTTP/HTTPS/SOCKS5)
- Custom DNS servers
- Rate limiting (token bucket algorithm)
- Dynamic timeout calculation
- Text and JSON output formats
- Configurable concurrent workers (1-1000)
- Real-time progress bar
- Graceful Ctrl+C handling with result export
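The `-rate` flag is described as a token-bucket limiter. ipmap's own limiter is not part of this diff, so the following is only a minimal sketch of the idea; the `tokenBucket` type and `Allow` method are illustrative names, not ipmap's API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenBucket is a minimal token-bucket sketch: the bucket refills at
// `rate` tokens per second up to `capacity`, and each request spends one.
type tokenBucket struct {
	mu       sync.Mutex
	tokens   float64
	capacity float64
	rate     float64 // tokens added per second
	last     time.Time
}

func newTokenBucket(ratePerSec float64) *tokenBucket {
	return &tokenBucket{
		tokens:   ratePerSec,
		capacity: ratePerSec,
		rate:     ratePerSec,
		last:     time.Now(),
	}
}

// Allow refills based on elapsed time, then consumes one token if available.
func (b *tokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	tb := newTokenBucket(50) // roughly equivalent to -rate 50
	allowed := 0
	for i := 0; i < 100; i++ {
		if tb.Allow() {
			allowed++
		}
	}
	// A rapid burst of 100 calls passes about 50 requests (the capacity);
	// the rest would have to wait for refill.
	fmt.Println("allowed in first burst:", allowed)
}
```

With `-rate 0` ipmap reportedly disables limiting entirely; in a sketch like this that would simply mean skipping the `Allow` check.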

## Installation

Download the latest version from [Releases](https://github.com/lordixir/ipmap/releases) and run:

```bash
unzip ipmap.zip
chmod +x ipmap
```
## Usage

### Parameters

```bash
-asn AS13335 # Scan all IP blocks in the ASN
-ip 103.21.244.0/22 # Scan specified IP blocks
-d example.com # Search for specific domain
-t 2000 # Request timeout in milliseconds (auto-calculated if not set)
--export # Auto-export results
-format json # Output format (text or json)
-workers 100 # Number of concurrent workers (default: 100)
-v # Verbose mode
-c # Continue scanning until completion
-proxy http://127.0.0.1:8080 # Proxy URL (HTTP/HTTPS/SOCKS5)
-rate 50 # Rate limit (requests/second, 0 = unlimited)
-dns 8.8.8.8,1.1.1.1 # Custom DNS servers
```

### Examples

**Basic ASN scan (auto timeout):**
```bash
ipmap -asn AS13335
```

**Find domain in ASN:**
```bash
ipmap -asn AS13335 -d example.com
```

**Scan IP blocks:**
```bash
ipmap -ip 103.21.244.0/22,103.22.200.0/22
```

**High-performance scan:**
```bash
ipmap -asn AS13335 -workers 200 -v
```

**Export results:**
```bash
ipmap -asn AS13335 -d example.com --export
```

**JSON output:**
```bash
ipmap -asn AS13335 -format json --export
```

## Proxy & Rate Limiting

ipmap supports HTTP, HTTPS, and SOCKS5 proxies for anonymous scanning.

**HTTP proxy:**
```bash
ipmap -asn AS13335 -proxy http://127.0.0.1:8080
```

**SOCKS5 proxy (Tor):**
```bash
ipmap -asn AS13335 -proxy socks5://127.0.0.1:9050
```

**Proxy with auth:**
```bash
ipmap -asn AS13335 -proxy http://user:pass@proxy.com:8080
```

**Rate limiting:**
```bash
ipmap -asn AS13335 -rate 50 -workers 50
```

**Full configuration:**
```bash
ipmap -asn AS13335 -d example.com -proxy http://127.0.0.1:8080 -rate 100 -workers 50 -dns 8.8.8.8 -v --export
```

> **Note:** When using proxies, reduce worker count and enable rate limiting to avoid overwhelming the proxy.

## Firewall Bypass Features

ipmap includes built-in firewall bypass techniques:

- **IP Shuffling:** Randomizes scan order to avoid sequential pattern detection
- **Header Randomization:** Rotates User-Agent, Accept-Language, Chrome versions, platforms
- **Request Jitter:** Adds random 0-50ms delay between requests
- **Dynamic Timeout:** Auto-adjusts timeout based on worker count
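As a rough illustration of the first and third techniques, shuffling and jitter can be sketched as below; the function names are hypothetical and do not correspond to ipmap's internals:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// shuffleIPs returns a copy of ips in random order (Fisher-Yates via
// rand.Shuffle), so a scan never walks a block sequentially.
func shuffleIPs(ips []string) []string {
	out := make([]string, len(ips))
	copy(out, ips)
	rand.Shuffle(len(out), func(i, j int) {
		out[i], out[j] = out[j], out[i]
	})
	return out
}

// jitter sleeps a random 0-50ms, mirroring the delay described above.
func jitter() {
	time.Sleep(time.Duration(rand.Intn(50)) * time.Millisecond)
}

func main() {
	ips := []string{"103.21.244.1", "103.21.244.2", "103.21.244.3", "103.21.244.4"}
	for _, ip := range shuffleIPs(ips) {
		jitter()
		fmt.Println("scanning", ip) // same addresses, randomized order
	}
}
```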

## Interrupt Handling (Ctrl+C)

Press Ctrl+C during a scan to:
1. Immediately stop all scanning
2. View the count of results found so far
3. Optionally export partial results
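The diff to `main.go` and `modules/interrupt_handler.go` implements this with a cancel channel that is closed on SIGINT; the following is a condensed, self-contained sketch of that pattern, with the worker logic simplified:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"sync"
	"syscall"
)

func main() {
	// Ctrl+C closes `cancel`; every worker selects on it to stop at once.
	cancel := make(chan struct{})
	var once sync.Once

	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
	go func() {
		<-sigCh
		once.Do(func() { close(cancel) }) // signal all goroutines exactly once
	}()

	jobs := make(chan string)
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-cancel:
					return // interrupt: stop immediately
				case ip, ok := <-jobs:
					if !ok {
						return // queue drained
					}
					_ = ip // a real worker would scan the IP here
				}
			}
		}()
	}

	close(jobs) // no work queued in this sketch; workers exit cleanly
	wg.Wait()
	fmt.Println("done") // prints "done"
}
```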

## Building

```bash
git clone https://github.com/lordixir/ipmap.git
cd ipmap
go build -o ipmap .
```
Expand All @@ -125,6 +139,19 @@ go build -o ipmap .
go test ./... -v
```

## Changelog (v2.0)

- ✅ Added IP shuffling for firewall bypass
- ✅ Added request jitter (0-50ms random delay)
- ✅ Added header randomization (language, Chrome version, platform)
- ✅ Fixed Ctrl+C interrupt handling (immediate stop)
- ✅ Added dynamic timeout calculation based on workers
- ✅ Added IPv6 support for ASN scanning
- ✅ Improved error logging
- ✅ Fixed result collection bug with high workers
- ✅ Removed gzip to fix response parsing
- ✅ Added scan statistics at completion

## License

This project is open-source and available under the MIT License.
27 changes: 23 additions & 4 deletions main.go
import (
"strconv"
"strings"
"syscall"
"time"
)

var (
	// ... (unchanged declarations collapsed in diff view) ...
)

func main() {
}

// Setup interrupt handler
interruptData = modules.NewInterruptData()
setupInterruptHandler()

// Log configuration if verbose
// ... (unchanged lines omitted) ...
return
}

// Set default timeout if not specified and no domain to calculate from
if *timeout == 0 && *domain == "" {
// Base timeout: 2000ms, scale up for high worker counts
baseTimeout := 2000
if *workers > 200 {
// Add extra time for high concurrency (network saturation)
baseTimeout = baseTimeout + (*workers / 100 * 500)
}
if baseTimeout > 10000 {
baseTimeout = 10000 // Max 10 seconds
}
*timeout = baseTimeout
config.InfoLog("Using auto-calculated timeout: %dms (workers: %d)", *timeout, *workers)
}

if *domain != "" {
Expand Down Expand Up @@ -130,7 +141,15 @@ func setupInterruptHandler() {

go func() {
<-sigChan
fmt.Println("\n\n[!] Scan interrupted by user - stopping...")

// Signal all goroutines to stop immediately
if interruptData != nil {
interruptData.Cancel()
}

// Give goroutines a moment to stop
time.Sleep(100 * time.Millisecond)

if interruptData != nil && len(interruptData.Websites) > 0 {
fmt.Printf("\n[*] Found %d websites before interruption\n", len(interruptData.Websites))
42 changes: 37 additions & 5 deletions modules/interrupt_handler.go
import "sync"

// InterruptData holds scan data for interrupt handling
type InterruptData struct {
Websites [][]string
IPBlocks []string
Domain string
Timeout int
Cancelled bool // Flag to indicate cancellation
CancelCh chan struct{} // Channel to signal cancellation
mu sync.Mutex
}

// NewInterruptData creates a new InterruptData with initialized cancel channel
func NewInterruptData() *InterruptData {
return &InterruptData{
CancelCh: make(chan struct{}),
}
}

// Cancel signals all goroutines to stop
func (id *InterruptData) Cancel() {
if id == nil {
return
}
id.mu.Lock()
defer id.mu.Unlock()
if !id.Cancelled {
id.Cancelled = true
close(id.CancelCh)
}
}

// IsCancelled returns whether the scan has been cancelled
func (id *InterruptData) IsCancelled() bool {
if id == nil {
return false
}
id.mu.Lock()
defer id.mu.Unlock()
return id.Cancelled
}

// AddWebsite safely adds a website to the interrupt data
45 changes: 39 additions & 6 deletions modules/request.go
func createDialContext() func(ctx context.Context, network, addr string) (net.Conn, error) {
}

func createHTTPClientWithConfig() *http.Client {
// Calculate connection pool size based on worker count
maxConns := config.Workers
if maxConns < 100 {
maxConns = 100
}
maxConnsPerHost := maxConns / 10
if maxConnsPerHost < 10 {
maxConnsPerHost = 10
}

transport := &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
// ... (unchanged lines omitted) ...
tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
},
},
MaxIdleConns: maxConns,
MaxIdleConnsPerHost: maxConnsPerHost,
MaxConnsPerHost: maxConnsPerHost * 2, // Allow more active connections
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ResponseHeaderTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
DialContext: createDialContext(),
ForceAttemptHTTP2: true,
DisableKeepAlives: false, // Keep connections alive for reuse
}

// Configure proxy if specified
func RequestFuncWithRetry(ip string, url string, timeout int, maxRetries int) []
ua := uarand.GetRandom()
req.Header.Set("User-Agent", ua)
req.Header.Set("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8")

// Randomize Accept-Language to avoid fingerprinting
languages := []string{
"en-US,en;q=0.9",
"en-GB,en;q=0.9",
"en-US,en;q=0.9,tr;q=0.8",
"de-DE,de;q=0.9,en;q=0.8",
"fr-FR,fr;q=0.9,en;q=0.8",
}
req.Header.Set("Accept-Language", languages[time.Now().UnixNano()%int64(len(languages))])

req.Header.Set("Accept-Encoding", "identity") // No compression to avoid decompression issues
req.Header.Set("Connection", "keep-alive")
req.Header.Set("Upgrade-Insecure-Requests", "1")
req.Header.Set("Sec-Fetch-Dest", "document")
req.Header.Set("Sec-Fetch-Mode", "navigate")
req.Header.Set("Sec-Fetch-Site", "none")
req.Header.Set("Sec-Fetch-User", "?1")
req.Header.Set("Cache-Control", "max-age=0")

// Randomize browser version fingerprint
chromeVersions := []string{
`"Chromium";v="120", "Not_A Brand";v="24"`,
`"Chromium";v="119", "Not_A Brand";v="24"`,
`"Chromium";v="121", "Not_A Brand";v="24"`,
`"Google Chrome";v="120", "Chromium";v="120"`,
}
req.Header.Set("Sec-Ch-Ua", chromeVersions[time.Now().UnixNano()%int64(len(chromeVersions))])
req.Header.Set("Sec-Ch-Ua-Mobile", "?0")

// Randomize platform
platforms := []string{`"Windows"`, `"macOS"`, `"Linux"`}
req.Header.Set("Sec-Ch-Ua-Platform", platforms[time.Now().UnixNano()%int64(len(platforms))])

resp, err := GetHTTPClient().Do(req)
