Apify Proxy Integration with Mobile IPs (2026 Guide)
A working engineer's guide to plugging ProxyStyler 4G/5G mobile proxies into Apify Actors. We cover the 2026 pay-per-event migration, marketplace economics, Python and JavaScript SDK configuration, LlamaIndex and LangChain integrations, and how to ship a ProxyStyler-powered Actor on the Apify Store before the rental model is retired on October 1, 2026.
What Is Apify?
Apify is the largest marketplace of pre-built web scrapers on the internet plus the cloud infrastructure that runs them. A scraper on Apify is called an Actor: a containerized program with a declared input schema, output dataset, and a billing plan that the Apify platform meters automatically. Users either consume Actors that other developers publish on the Store, or they build and deploy their own Actors and optionally monetize them.
Two Sides of the Platform
Apify operates on two sides simultaneously. Developers build Actors and monetize them on a marketplace that now contains several thousand public scrapers. End users, from solo operators to Fortune 500 data teams, run those Actors on demand without maintaining any infrastructure. Both sides share the same cloud runtime: Kubernetes nodes in multiple regions, a request queue service, key-value stores, dataset storage, and Apify's own residential and datacenter proxy pools.
For End Users
Run pre-built Actors for Instagram, TikTok, LinkedIn, Amazon, Google Maps, and thousands more
No infrastructure, no captcha solving, no proxy management required
Pay only for what you run: per-event, per-result, or per-usage
Schedule runs, trigger via webhook, export to S3, BigQuery, Zapier, Make
For Developers
Ship an Actor in Python or JavaScript and publish to the Store
Earn 80% of monthly rental fees or per-event charges
Creator Plan: $500/month of platform usage at $1/month for 6 months
Built-in dataset, request queue, key-value store, proxy, and logging
Pricing Migration 2026: Rental Is Retiring
Critical Deadlines
- April 1, 2026: No new rental Actors will be accepted on the Apify Store. All newly published Actors must use pay-per-event, pay-per-result, or pay-per-usage pricing.
- October 1, 2026: Rental pricing is fully retired. Any Actor still on a rental plan after this date stops billing users and must migrate to survive.
Apify spent most of 2024 and 2025 running the largest pricing-model migration in the company's history. Rental pricing (a flat monthly fee for access to an Actor) was the original model but suffered from a classic subscription problem: users paid even when they did not run the scraper, and developers had no way to differentiate casual lookups from heavy crawls. Pay-per-event (PPE) solves both problems by metering discrete billable events.
The Three Surviving Pricing Models
Pay-Per-Event (PPE)
The developer declares named events in actor.json, then calls Actor.charge(eventName, count) inside the Actor code whenever one happens. Typical events: page-opened, dataset-item-stored, api-call, captcha-solved, image-downloaded.
Pay-Per-Result
Simpler: charge a fixed amount per item pushed to the default dataset. Users love the predictability; developers find it harder to price properly when results have radically different cost profiles.
Pay-Per-Usage
The raw platform model: users pay for compute units, bandwidth, and storage that the Actor consumes. The developer earns nothing extra. Best for internal Actors or open-source community scrapers.
Why More Than 2,000 Actors Already Migrated
Apify published migration data showing that developers who switched from pay-per-result to pay-per-event saw average revenue per Actor increase because PPE lets them price each expensive event (page loads, captcha solves, external API calls) separately instead of burying them all in one result price. As of early 2026, more than 2,000 Actors on the Store have migrated to PPE voluntarily before the rental deadline forces the rest.
Developer Economics Under PPE
- Developer keeps 80% of billable event revenue
- Apify keeps 20% to cover platform, payments, and fraud detection
- Monthly payouts via Stripe Connect or Wise
- Free-tier users consume your quota, but the 80/20 split only applies to paid events
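As a sanity check, the split is easy to model. The sketch below uses hypothetical event prices and monthly volumes (all numbers illustrative, not Apify figures):

```python
# Illustrative PPE month: gross event revenue and the 80/20 split.
# Event prices and volumes are hypothetical, not Apify figures.
EVENT_PRICE_USD = {"page-opened": 0.0005, "dataset-item-stored": 0.002}
EVENT_COUNT = {"page-opened": 1_200_000, "dataset-item-stored": 300_000}

gross = sum(EVENT_PRICE_USD[e] * EVENT_COUNT[e] for e in EVENT_PRICE_USD)
developer_payout = gross * 0.80  # developer keeps 80%
apify_fee = gross * 0.20         # Apify keeps 20%

print(f"gross=${gross:,.2f}  developer=${developer_payout:,.2f}  apify=${apify_fee:,.2f}")
```

Note that free-tier runs of the same events generate no payout at all, so the realistic gross is lower than raw event counts suggest.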
Apify Actors Explained
An Apify Actor is a Docker container plus four metadata files that turn raw code into a monetizable product. Understanding the file layout is the single biggest lever for building a scraper that is easy to ship, easy to configure, and easy to price.
actor.json
input_schema.json
Dockerfile
README.md
What Actors Actually Cost in 2026
| Actor Category | Typical Price | Pricing Model | Why That Price |
|---|---|---|---|
| Generic HTML scraper | $1.00 / 1k results | Per-result | Low compute, datacenter proxy |
| Google Maps scraper | $4.00 / 1k places | Per-result | Residential proxy required |
| LinkedIn profile scraper | $8.00 / 1k profiles | Per-event | Mobile proxy + anti-bot |
| TikTok video scraper | $6.00 / 1k videos | Per-event | 4G proxy ideal for mobile-first API |
| Instagram hashtag scraper | $5.00 / 1k posts | Per-event | Mobile proxy mandatory |
| Amazon product scraper | $3.00 / 1k items | Per-result | Residential proxy sufficient |
| SERP scraper | $2.50 / 1k SERPs | Per-event | Heavy captcha load |
| Twitter / X scraper | $10.00 / 1k tweets | Per-event | API rate limits + mobile proxy |
Creator Plan: Ship Your First Actor Almost Free
Apify's Creator Plan grants $500 of platform usage every month for the first six months at a total cost of $1 per month. That covers compute, bandwidth, storage, and Apify Proxy. The catch: you must publish at least one public Actor within those six months, and it must survive a short review.
- $500/month usage cap, rolling 30-day window
- $1/month for the first six months, then standard pricing
- Stackable with Store revenue (you keep 80% of user payments)
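To gauge how far the allowance stretches, assume roughly $0.40 per compute unit (the figure this guide uses later for cost modeling). The arithmetic below is a back-of-envelope sketch, not an Apify quote:

```python
# Back-of-envelope: what $500/month of Creator Plan credit buys,
# assuming ~$0.40 per compute unit (1 GB of RAM for 1 hour).
ALLOWANCE_USD = 500.0
COMPUTE_UNIT_USD = 0.40

compute_units = ALLOWANCE_USD / COMPUTE_UNIT_USD  # GB-hours of runtime
hours_at_2gb = compute_units / 2                  # running a 2 GB Actor

print(f"{compute_units:.0f} compute units per month")
print(f"about {hours_at_2gb:.0f} hours of a 2 GB Actor")
```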
Apify Proxy vs ProxyStyler BYOP
Apify ships its own proxy product (Apify Proxy) with datacenter, residential, and a limited mobile pool. For most generic scraping jobs Apify Proxy is the path of least resistance. For mobile-first targets, aggressive anti-bot stacks, and anything where CGNAT trust matters, bringing your own ProxyStyler endpoint gives measurably better success rates.
Apify Proxy (native)
Zero configuration. Enable it in the Actor's proxy input and Apify routes all traffic through its managed pool.
Datacenter pool free on paid plans
Residential pool at $8/GB
Built-in session rotation and sticky IPs
Mobile pool is small and per-country selection is limited
No dedicated-IP option for long sticky sessions
ProxyStyler BYOP
Plug a single ProxyStyler endpoint into ProxyConfiguration and scale to the Actor's concurrency limit without touching Apify's proxy pricing.
Real 4G/5G carrier IPs with full CGNAT trust
Dedicated IP per device (no shared pool)
On-demand IP rotation via HTTP endpoint
Flat monthly pricing, no bandwidth overages
Country-specific endpoints (US, UK, Europe, more)
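The on-demand rotation endpoint is plain HTTP, so it needs no SDK at all. A minimal stdlib sketch follows; the URL shape and the settle delay are assumptions, so use the endpoint and timing from your ProxyStyler dashboard:

```python
import time
import urllib.request


def rotate_ip(rotate_url: str, settle_seconds: float = 5.0) -> bool:
    """Hit the rotation endpoint, then wait for the modem to pick up a new IP."""
    try:
        with urllib.request.urlopen(rotate_url, timeout=10) as resp:
            ok = resp.status == 200
    except OSError:  # URLError subclasses OSError: DNS failure, refused, timeout
        return False
    if ok:
        time.sleep(settle_seconds)  # carrier needs a moment to assign a new IP
    return ok
```

Call it between batches rather than between every request; mobile carriers take a few seconds to re-register the modem.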
Decision Matrix: Which One Should Your Actor Use?
| Target | Recommended Proxy | Why |
|---|---|---|
| Instagram, TikTok, Facebook mobile | ProxyStyler 4G | Mobile-first endpoints aggressively block datacenter |
| LinkedIn, Indeed, Glassdoor | ProxyStyler 4G | Strict fingerprinting + AS-number trust scoring |
| Google Search / SERP | ProxyStyler 4G or Apify residential | Heavy captcha load; mobile usually wins |
| Amazon, eBay, Walmart | Apify residential fine | Bot defenses tolerate quality residential |
| Generic news, blogs, docs | Apify datacenter | Cheapest option that still works |
| Regional price monitoring | ProxyStyler per-country | Deterministic geo-IP, no VPN fingerprint |
| API endpoints with mTLS | ProxyStyler dedicated IP | Whitelisted IP required |
Configuring Custom Proxy in actor.json
Every Actor declares its proxy requirements in actor.json. For BYOP, you either hardcode a ProxyStyler endpoint (useful when you ship a ProxyStyler-powered Actor) or you expose a proxy field in input_schema.json so the end user supplies their own. Here is the canonical shape of both.
actor.json with proxy requirement
{
"actorSpecification": 1,
"name": "proxystyler-powered-scraper",
"version": "0.1",
"buildTag": "latest",
"title": "ProxyStyler Mobile Proxy Scraper",
"description": "Scrapes target URLs through ProxyStyler 4G mobile IPs.",
"dockerfile": "./Dockerfile",
"input": "./input_schema.json",
"storages": {
"dataset": {
"actorSpecification": 1,
"views": {
"default": {
"title": "Results",
"transformation": { "fields": ["url", "title", "scrapedAt"] }
}
}
}
},
"minMemoryMbytes": 512,
"maxMemoryMbytes": 4096,
"defaultRunOptions": {
"build": "latest",
"timeoutSecs": 3600,
"memoryMbytes": 2048
},
"meta": { "templateId": "python-start" },
"pricingInfos": [
{
"pricingModel": "PAY_PER_EVENT",
"pricingPerEvent": {
"actorChargeEvents": {
"page-opened": {
"eventTitle": "Page opened",
"eventDescription": "Charged for each page the Actor visits.",
"eventPriceUsd": 0.0005
},
"dataset-item-stored": {
"eventTitle": "Result stored",
"eventDescription": "Charged for each item pushed to the dataset.",
"eventPriceUsd": 0.002
}
}
}
}
]
}
input_schema.json with BYOP field
{
"title": "ProxyStyler Scraper Input",
"type": "object",
"schemaVersion": 1,
"properties": {
"startUrls": {
"title": "Start URLs",
"type": "array",
"editor": "requestListSources",
"description": "URLs the Actor will visit first."
},
"proxyConfiguration": {
"title": "Proxy",
"type": "object",
"editor": "proxy",
"description": "Choose Apify Proxy or supply ProxyStyler URLs below.",
"prefill": { "useApifyProxy": false }
},
"proxystylerProxyUrls": {
"title": "ProxyStyler proxy URLs (BYOP)",
"type": "array",
"editor": "stringList",
"description": "One or more ProxyStyler endpoints, e.g. http://user:pass@us.proxystyler.com:30000",
"default": []
},
"proxystylerRotateUrl": {
"title": "ProxyStyler rotate endpoint",
"type": "string",
"editor": "textfield",
"description": "Optional HTTP rotation URL provided in your ProxyStyler dashboard.",
"isSecret": true
}
},
"required": ["startUrls"]
}
Secret Inputs
Always mark proxy credentials as isSecret: true. Apify encrypts secret input fields at rest and redacts them from run logs. Users can paste their ProxyStyler username and password without worrying that another Actor developer sees them in a shared workspace.
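If you prefer separate username/password fields over a full URL, a tiny helper can assemble the proxy URL at runtime so credentials stay in their own secret fields. The field split and function name here are hypothetical, not part of the schema above:

```python
from urllib.parse import quote


def build_proxy_url(host: str, port: int, user: str, password: str) -> str:
    # Percent-encode credentials so characters like '@' and ':' survive the URL
    return f"http://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}"


url = build_proxy_url("us.proxystyler.com", 30000, "user", "p@ss:w0rd")
# url == "http://user:p%40ss%3Aw0rd@us.proxystyler.com:30000"
```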
Python SDK Integration
The Apify SDK for Python ships a ProxyConfiguration helper that accepts a list of proxy URLs and hands out the next one each time you call new_url(). ProxyStyler endpoints fit the interface natively. The snippet below is a working Actor that pulls ProxyStyler URLs from input, rotates for each request, and emits PPE charges.
# main.py - Apify Python SDK with ProxyStyler BYOP
from datetime import datetime, timezone

import httpx
from apify import Actor, ProxyConfiguration
from selectolax.parser import HTMLParser


async def main() -> None:
    async with Actor:
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get("startUrls", [])
        proxystyler_urls = actor_input.get("proxystylerProxyUrls", [])
        rotate_url = actor_input.get("proxystylerRotateUrl")

        # Build a ProxyConfiguration from ProxyStyler endpoints
        if proxystyler_urls:
            proxy_config = ProxyConfiguration(proxy_urls=proxystyler_urls)
        else:
            proxy_config = await Actor.create_proxy_configuration(
                groups=["RESIDENTIAL"],
                country_code="US",
            )

        for request in start_urls:
            url = request["url"] if isinstance(request, dict) else request
            proxy_url = await proxy_config.new_url()
            Actor.log.info(f"Fetching {url} via {proxy_url}")

            # httpx takes the proxy on the client, not per request,
            # so open a short-lived client for each proxied fetch
            async with httpx.AsyncClient(proxy=proxy_url, timeout=30) as client:
                response = await client.get(
                    url,
                    headers={"User-Agent": "Mozilla/5.0 Apify/ProxyStyler"},
                )

            # Charge one page-opened event
            await Actor.charge("page-opened")

            if response.status_code == 200:
                tree = HTMLParser(response.text)
                title_node = tree.css_first("title")
                await Actor.push_data({
                    "url": url,
                    "title": title_node.text() if title_node else None,
                    "status": response.status_code,
                    "scrapedAt": datetime.now(timezone.utc).isoformat(),
                })
                await Actor.charge("dataset-item-stored")

            # Optional: rotate ProxyStyler IP for next request
            if rotate_url:
                try:
                    async with httpx.AsyncClient(timeout=10) as rotator:
                        await rotator.get(rotate_url)
                except Exception as e:
                    Actor.log.warning(f"Rotation failed: {e}")


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())

What the SDK handles for you
Round-robin URL selection with session stickiness
Automatic retries on proxy errors (502, 504, timeouts)
Secret redaction in logs
Charge deduplication inside a single request
Graceful shutdown on platform SIGTERM
Crawlee-Python variant
For larger projects swap httpx for Crawlee-Python:
from apify import Actor, ProxyConfiguration
from crawlee.playwright_crawler import PlaywrightCrawler

proxy_config = ProxyConfiguration(proxy_urls=proxystyler_urls)
crawler = PlaywrightCrawler(proxy_configuration=proxy_config)
Python Version Requirements
The Apify SDK for Python requires Python 3.9 or newer. As of SDK 2.x (current in 2026) async is mandatory; the old synchronous API is deprecated. Use the official base image apify/actor-python:3.12 to avoid cold-start surprises.
JavaScript SDK Integration
The JavaScript SDK mirrors the Python API almost field-for-field. Where it wins is the Crawlee ecosystem: PlaywrightCrawler, PuppeteerCrawler, CheerioCrawler, and JSDOMCrawler all accept a ProxyConfiguration instance directly.
// main.js - Apify JS SDK + Crawlee + ProxyStyler BYOP
import { Actor } from 'apify';
import { PlaywrightCrawler, ProxyConfiguration } from 'crawlee';

await Actor.init();

const input = await Actor.getInput() ?? {};
const {
    startUrls = [],
    proxystylerProxyUrls = [],
    proxystylerRotateUrl,
    maxConcurrency = 5,
} = input;

// Build the proxy configuration from ProxyStyler endpoints
const proxyConfiguration = proxystylerProxyUrls.length > 0
    ? new ProxyConfiguration({ proxyUrls: proxystylerProxyUrls })
    : await Actor.createProxyConfiguration({
        groups: ['RESIDENTIAL'],
        countryCode: 'US',
    });

const crawler = new PlaywrightCrawler({
    proxyConfiguration,
    maxConcurrency,
    launchContext: {
        launchOptions: {
            headless: true,
            args: ['--disable-blink-features=AutomationControlled'],
        },
    },
    async requestHandler({ page, request, log }) {
        log.info(`Scraping ${request.url}`);
        await page.waitForLoadState('domcontentloaded');

        const title = await page.title();
        const html = await page.content();

        await Actor.charge({ eventName: 'page-opened' });
        await Actor.pushData({
            url: request.url,
            title,
            htmlLength: html.length,
            scrapedAt: new Date().toISOString(),
        });
        await Actor.charge({ eventName: 'dataset-item-stored' });

        // Rotate ProxyStyler IP between requests when an endpoint is supplied
        if (proxystylerRotateUrl) {
            try {
                await fetch(proxystylerRotateUrl, {
                    method: 'GET',
                    signal: AbortSignal.timeout(10_000),
                });
            } catch (err) {
                log.warning(`ProxyStyler rotate failed: ${err.message}`);
            }
        }
    },
    failedRequestHandler({ request, log }, error) {
        log.error(`${request.url} failed: ${error.message}`);
    },
});

await crawler.run(startUrls);
await Actor.exit();

CheerioCrawler for pure HTML (no browser)
import { Actor } from 'apify';
import { CheerioCrawler, ProxyConfiguration } from 'crawlee';

const crawler = new CheerioCrawler({
    proxyConfiguration: new ProxyConfiguration({
        proxyUrls: [
            'http://user:pass@us.proxystyler.com:30000',
            'http://user:pass@us.proxystyler.com:30001',
            'http://user:pass@us.proxystyler.com:30002',
        ],
    }),
    maxRequestsPerMinute: 180, // stay polite
    requestHandlerTimeoutSecs: 60,
    async requestHandler({ $, request }) {
        const title = $('title').text();
        const prices = $('.price').map((_, el) => $(el).text()).get();
        await Actor.pushData({ url: request.url, title, prices });
        await Actor.charge({ eventName: 'page-opened' });
    },
});

await crawler.run(startUrls);

Node Runtime Requirements
- Apify SDK for JavaScript 3.x requires Node 18+; Node 20 recommended
- Crawlee 3.x ships its own fingerprint generator that pairs cleanly with mobile IPs
- Official base image: apify/actor-node-playwright-chrome:20
- Use apify-cli to run locally: apify run -p
LlamaIndex + Apify
LlamaIndex ships an official ApifyActor loader that runs an Apify Actor on demand and feeds its dataset into a VectorStoreIndex. Combined with a ProxyStyler-powered Actor you get a fully auditable RAG pipeline: the retrieval layer sees real mobile IPs, and LlamaIndex handles chunking, embedding, and retrieval.
# pip install llama-index apify-client
import os

from llama_index.core import VectorStoreIndex, Document
from llama_index.readers.apify import ApifyActor

# The reader is constructed with your Apify API token;
# the Actor to run is passed to load_data()
reader = ApifyActor(os.environ["APIFY_API_TOKEN"])

# Run the ProxyStyler-powered scraper as an Actor
documents = reader.load_data(
    actor_id="your-username/proxystyler-powered-scraper",
    run_input={
        "startUrls": [
            {"url": "https://example.com/docs"},
            {"url": "https://example.com/blog"},
        ],
        "proxystylerProxyUrls": [
            "http://user:pass@us.proxystyler.com:30000",
        ],
    },
    dataset_mapping_function=lambda item: Document(
        text=item.get("htmlContent") or item.get("title", ""),
        metadata={
            "url": item.get("url"),
            "scraped_at": item.get("scrapedAt"),
            "source": "proxystyler-apify",
        },
    ),
)

# Build a vector index over the freshly scraped content
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=5)

answer = query_engine.query("What changed in the pricing section last week?")
print(answer)

Why This Matters
- Fresh corpus per query. The ApifyActor loader can be invoked at query time, meaning your LLM answers from data that is seconds old instead of whatever was crawled last week.
- Proxy provenance. Every document ingested carries metadata about which ProxyStyler IP fetched it. Useful for compliance and for debugging geographical content differences.
- Cost control. Under PPE, LlamaIndex only pays for the pages it actually reads, not a flat monthly fee for an Actor it might never run.
LangChain + Apify
LangChain exposes two helpers for Apify: ApifyWrapper that triggers an Actor run and ApifyDatasetLoader that pulls results from an existing dataset ID. Both produce LangChain Document objects suitable for chains, agents, and retrievers.
# pip install langchain langchain-apify apify-client
from langchain.indexes import VectorstoreIndexCreator
from langchain_apify import ApifyWrapper
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

apify = ApifyWrapper()

loader = apify.call_actor(
    actor_id="your-username/proxystyler-powered-scraper",
    run_input={
        "startUrls": [{"url": "https://news.example.com/latest"}],
        "proxystylerProxyUrls": [
            "http://user:pass@us.proxystyler.com:30000",
            "http://user:pass@us.proxystyler.com:30001",
        ],
    },
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("htmlContent") or item.get("title", ""),
        metadata={
            "url": item.get("url"),
            "source": "proxystyler-apify",
            "scraped_at": item.get("scrapedAt"),
        },
    ),
)

# Vector index
index = VectorstoreIndexCreator(
    embedding=OpenAIEmbeddings(model="text-embedding-3-large"),
).from_loaders([loader])

result = index.query("Summarize today's top stories.")
print(result)

ApifyDatasetLoader for Pre-Existing Runs
from langchain_apify import ApifyDatasetLoader
from langchain_core.documents import Document

# Already ran the Actor? Load the dataset directly.
loader = ApifyDatasetLoader(
    dataset_id="AbCdEfGhIj1234567",
    dataset_mapping_function=lambda x: Document(
        page_content=x.get("html") or "",
        metadata={"url": x.get("url")},
    ),
)

docs = loader.load()
print(f"Loaded {len(docs)} documents from ProxyStyler-scraped dataset")

Production Tips
- Set memory_mbytes=4096 on long-running Actor calls to avoid OOM during large retrievals
- Use build="latest" only in dev; pin a specific build tag in production for reproducibility
- Store the Apify token as a secret in LangSmith or Vault; never embed it in source
- Cache Actor runs with a hash key based on startUrls to avoid re-scraping identical inputs
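The caching tip can be as simple as hashing the canonicalized run input. A sketch, where the key format and function name are illustrative:

```python
import hashlib
import json


def run_cache_key(actor_id: str, run_input: dict) -> str:
    # Canonical JSON: sorted keys, no whitespace, so equal inputs hash equally
    canonical = json.dumps(run_input, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
    return f"{actor_id}:{digest}"


a = run_cache_key("me/scraper", {"startUrls": [{"url": "https://example.com"}], "maxPages": 10})
b = run_cache_key("me/scraper", {"maxPages": 10, "startUrls": [{"url": "https://example.com"}]})
assert a == b  # key order does not matter
```

Before calling the Actor, look the key up in your own store (Redis, a table, even a dict) and reuse the prior dataset ID on a hit.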
Building Your Own ProxyStyler-Powered Actor
The fastest path to shipping a ProxyStyler-powered Actor on the Apify Store: use the official template, wire in the ProxyStyler endpoint as an input, declare PPE events in actor.json, and push. Apify's build system compiles the Dockerfile in the cloud, so you do not need a local Docker daemon at all.
Step-by-step
# 1. Install the Apify CLI
npm install -g apify-cli
# 2. Log in (opens browser to fetch token)
apify login
# 3. Create a new Actor from template
apify create proxystyler-powered-scraper --template=python-crawlee-playwright
# 4. Move into the folder
cd proxystyler-powered-scraper
# 5. Edit .actor/actor.json to add PPE events (see earlier snippet)
# 6. Edit .actor/input_schema.json to add proxystylerProxyUrls
# 7. Edit src/main.py to use ProxyConfiguration (see Python section)
# 8. Test locally with a real ProxyStyler endpoint
apify run --purge --input='{
"startUrls": [{"url": "https://example.com"}],
"proxystylerProxyUrls": ["http://user:pass@us.proxystyler.com:30000"]
}'
# 9. Push to Apify Cloud
apify push
# 10. (Optional) Publish to the Store
# - Add screenshots, icon, category
# - Submit for review
# - Typical approval: 1-3 business days

Monetization Checklist
Must-Have for Store Publication
PPE, pay-per-result, or pay-per-usage pricing (no rental)
Clear README with at least one working example
Icon (512x512 PNG) and 3+ screenshots
Input schema with descriptions on every field
Sample dataset output (at least 5 rows)
Stripe Connect or Wise payout account
Nice-to-Have (Drives Installs)
Video demo embedded in README
Free-tier quota (e.g., 100 results free per user)
Integration examples with Zapier, Make, Sheets
Separate PPE events for expensive operations
Performance badge ("Scraped 10M+ pages in 2025")
Proxy comparison table in README
Hidden Costs to Price Against
When you bake a ProxyStyler subscription into a Store Actor, the Actor's price must cover both Apify platform fees and your ProxyStyler cost. A simple model:
- Apify compute: ~$0.40 per compute unit (1 GB * 1 hour)
- Apify takes 20% of your event revenue
- ProxyStyler flat fee per device (amortized across all users)
- Price each event at 3-5x marginal cost to leave room for support and iteration
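Folding those costs into an event price is straightforward. A hedged sketch that grosses up for Apify's 20% cut and applies the suggested 3-5x margin (all input costs illustrative):

```python
def event_price_usd(compute_cost: float, proxy_cost: float,
                    margin: float = 4.0, apify_share: float = 0.20) -> float:
    """Choose a price so (1 - apify_share) * price = margin * marginal cost."""
    marginal = compute_cost + proxy_cost
    return margin * marginal / (1 - apify_share)


# e.g. $0.0001 compute + $0.00005 amortized proxy per page, 4x margin
price = event_price_usd(0.0001, 0.00005)
print(f"${price:.5f} per page-opened event")
```

Dividing the gross-up by (1 - 0.20) matters: pricing at margin times cost without it silently hands a fifth of your intended margin to the platform fee.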
Ship Your Apify Actor with ProxyStyler Mobile IPs
Flat-rate 4G and 5G endpoints, dedicated IPs, on-demand rotation, and per-country targeting. Drop them into any Actor's ProxyConfiguration in one line.
Start Integrating ProxyStyler + Apify Today
Whether you are migrating a rental Actor before October 2026 or building your first pay-per-event scraper, ProxyStyler mobile proxies plug into the Apify SDK in one line of ProxyConfiguration. Get an endpoint, copy the snippets above, and ship before the migration deadline lands.