Why Custom Policy Groups Matter More Than Raw Rules
Most beginners think of Clash as a long list of rules: lines that send traffic to DIRECT or a single PROXY group. That works until your life does not fit one default path. You might want corporate SaaS on a stable node, video sites on a bandwidth-friendly server, domestic banking domains untouched, and game UDP on the lowest-latency hop. None of that is magic: it is just policy groups—named decision points your rules reference instead of hard-coding a single outbound every time.
A policy group (the proxy-groups section in YAML) is the control surface. The rules section is the routing table that picks which group (or direct) each connection uses. When both are designed together, you get predictable behavior: the rule layer answers “what kind of traffic is this?” and the group layer answers “given that label, which node or strategy should we use right now?”
If you are new to YAML structure, skim our documentation hub first, then return here for routing depth. For protocol context that affects which outbounds you can place inside groups, see the protocols overview.
The Rule Pipeline: First Match Wins
Clash evaluates the rules: section from top to bottom. The first line that matches the current connection stops the search. There is no “best match” or weighted score—only order. That single fact explains half of the misroutes people complain about: a broad rule above a specific one silently steals traffic.
Each rule line maps to a policy: a built-in keyword like DIRECT or REJECT, or the name of one of your proxy-groups / proxies. Naming groups clearly—WORK_STABLE, MEDIA_AUTO, GAME_LOWLAT—makes logs readable and reduces copy-paste errors when you reorganize YAML.
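To see how a broad rule steals traffic, consider this hypothetical pair of rules (the domain and group names are illustrative, not from any real list):

```yaml
rules:
  # BROKEN ORDER: the keyword rule matches video.example-bank.com
  # before the specific banking exception is ever reached.
  - DOMAIN-KEYWORD,video,MEDIA_AUTO
  - DOMAIN-SUFFIX,example-bank.com,DIRECT

  # FIXED ORDER: put the specific exception above the broad rule.
  # - DOMAIN-SUFFIX,example-bank.com,DIRECT
  # - DOMAIN-KEYWORD,video,MEDIA_AUTO
```

Because evaluation stops at the first match, swapping those two lines is the entire fix.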
Proxy-Group Types You Will Actually Use
Modern Clash derivatives powered by the Mihomo core support several type values. Four cover almost every real scenario:
- select — manual choice in the UI. Best for “I want to pin a node for now” or exposing nested groups as a flat menu.
- url-test — periodic latency checks against a test URL; picks the fastest member. Ideal for automatic resilience when you do not care which specific node wins.
- load-balance — spreads connections across members (algorithm depends on options). Useful for heavy download workloads when your provider allows it.
- relay — chains nodes in sequence. Advanced and higher latency; use only when you understand the trade-offs.
url-test and load-balance both depend on sane url and interval settings. If the test URL is blocked or slow from your network, the “best” node selection becomes noise. Many users point the test to a lightweight HTTPS endpoint that responds consistently worldwide.
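A minimal url-test group sketch, with placeholder node names; gstatic’s generate_204 endpoint is a popular test target because it returns an empty 204 response quickly from most networks:

```yaml
proxy-groups:
  - name: AUTO_FAST
    type: url-test
    url: https://www.gstatic.com/generate_204  # lightweight, consistent HTTPS endpoint
    interval: 300                              # re-test members every 5 minutes
    proxies:
      - node-a   # placeholder outbound names
      - node-b
```

If that URL is unreachable from your region, substitute any small, reliably reachable HTTPS endpoint before trusting the latency rankings.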
Nesting Groups Without Losing Your Mind
A group’s proxies: list can include other group names, not just raw outbounds. That lets you build a hierarchy: a top-level FINAL select that offers INTELLIGENT_AUTO (a url-test), MANUAL_PICK (another select), and DIRECT. Nested structures keep the daily UI small while still exposing advanced fallbacks when something breaks.
Keep nesting shallow unless you enjoy YAML archaeology. Three layers is usually enough: scenario group → strategy group → concrete nodes.
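The hierarchy described above can be sketched directly in YAML (node names are placeholders):

```yaml
proxy-groups:
  - name: FINAL              # scenario layer: what the daily UI shows
    type: select
    proxies:
      - INTELLIGENT_AUTO     # groups can reference other groups by name
      - MANUAL_PICK
      - DIRECT
  - name: INTELLIGENT_AUTO   # strategy layer: automatic selection
    type: url-test
    url: https://www.gstatic.com/generate_204
    interval: 300
    proxies:
      - node-a
      - node-b
  - name: MANUAL_PICK        # strategy layer: human override
    type: select
    proxies:
      - node-a
      - node-b
```

Day to day you only touch FINAL; the strategy groups sit one level down for when something breaks.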
Sticky Sessions and Churn in Automatic Groups
Automatic groups feel magical until nodes flap. A url-test group re-evaluates members on an interval; if the winner changes constantly, long-lived connections may hop in ways that break login sessions or trigger bot checks on sensitive sites. Raising tolerance (milliseconds) tells the core to stick with the current choice unless a rival is meaningfully faster, which smooths day-to-day browsing. Separately, some cores expose options related to lazy activation or sticky outbound behavior—read your client’s release notes, because naming differs across forks.
If you need hard session affinity for a handful of domains, route those domains to a manual select group instead of an auto tester. The extra click in the UI is cheaper than debugging OAuth loops that only appear when the fastest node rotates.
RULE-SET and Rule Providers: Scale Without a 10,000-Line File
Maintaining giant inline rule lists is brittle. Rule providers download or load external rule files (often community-maintained domain or IP lists) and expose them as named sets you reference with RULE-SET,name,policy. Providers support refresh intervals so your ad-blocking, streaming, or region lists stay current without hand-editing every week.
When you combine providers with local exceptions, put the exceptions above the RULE-SET lines. For example, force a single finance domain to DIRECT before a broad “global sites” set sends it through a proxy. That pattern—exceptions upward, bulk lists downward—is how you keep precision without sacrificing coverage.
```yaml
rule-providers:
  streaming:
    type: http
    behavior: classical
    url: "https://example.com/rules/streaming.yaml"
    path: ./ruleset/streaming.yaml
    interval: 86400

rules:
  - DOMAIN-SUFFIX,corp.internal,DIRECT
  - RULE-SET,streaming,MEDIA_AUTO
  - GEOIP,CN,DIRECT
  - MATCH,FINAL
```
Replace the sample URL with a source you trust; verify licensing and update frequency. The important lesson is structural: providers are ordinary citizens in the first-match pipeline, so their placement is a policy decision, not an afterthought.
Precision Knobs: Domain, IP, GEOIP, and Process
Beyond RULE-SET, Clash offers fine-grained matchers. Common ones include DOMAIN, DOMAIN-SUFFIX, DOMAIN-KEYWORD (use sparingly—easy to over-match), IP-CIDR, and GEOIP. On desktop Mihomo builds you may also have PROCESS-NAME (or related) matchers to tie a binary name to a policy—powerful for splitting a browser from a game without listing every domain the game touches.
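A hypothetical sketch of process-based splitting, assuming a desktop Mihomo build that ships PROCESS-NAME; the binary and group names are illustrative:

```yaml
rules:
  - PROCESS-NAME,steam.exe,GAME_LOWLAT    # route one binary without enumerating its domains
  - PROCESS-NAME,chrome.exe,DEFAULT_AUTO  # browser traffic falls back to the auto group
  - GEOIP,CN,DIRECT
  - MATCH,FINAL
```

Process matchers generally require the client to see the originating process, so they may not work behind a plain system-proxy setup; TUN or local operation is typically needed.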
GEOIP,CN,DIRECT remains the classic split-tunnel backbone for Chinese users: domestic traffic stays local, everything else defers to later rules or the final policy. If you operate in a different region, swap the ISO code and verify your GeoIP database is current in the client you use; stale databases mis-label IPs and silently defeat your intent.
DNS, fake-ip, and Why Rules Still See a “Strange” Address
Under enhanced-mode: fake-ip, applications often connect to synthetic addresses in a reserved range. Clash maps those flows back to domain intent internally, so your DOMAIN rules still behave—but IP-based rules may not trigger the way you expect until real resolution happens at the outbound. If a domain rule “should” match yet does not, check whether you accidentally relied on IP rules that only see the fake segment, or whether DNS and routing are split across interfaces when TUN mode is off.
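A representative DNS block under fake-ip mode might look like the following (the nameserver and filter entries are placeholders to adapt to your environment):

```yaml
dns:
  enable: true
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16     # synthetic range handed to applications
  fake-ip-filter:
    - "*.lan"                      # names that must resolve to real IPs
    - "time.windows.com"           # e.g. NTP-adjacent lookups some OSes need
  nameserver:
    - https://1.1.1.1/dns-query    # upstream resolver (placeholder)
```

Anything matched by fake-ip-filter bypasses the synthetic range, which is how you keep IP-sensitive services from seeing 198.18.x.x addresses.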
For a full stack walkthrough of TUN, DNS hijacking, and leak prevention, read the dedicated TUN mode guide after you finish tuning rules here. The two topics are inseparable in production setups.
Worked Example: Building Three “Life Mode” Groups
Imagine three outbound strategies: WORK_STABLE (select among three low-churn nodes), MEDIA_AUTO (url-test among streaming-tolerant nodes), and DEFAULT_AUTO (url-test among everything else). Your rules send SaaS domains to WORK_STABLE, streaming RULE-SET to MEDIA_AUTO, LAN and CN traffic to DIRECT, and the remainder to DEFAULT_AUTO.
```yaml
proxy-groups:
  - name: WORK_STABLE
    type: select
    proxies:
      - node-w1
      - node-w2
      - node-w3
      - DIRECT
  - name: MEDIA_AUTO
    type: url-test
    url: https://www.gstatic.com/generate_204
    interval: 300
    tolerance: 50
    proxies:
      - media-a
      - media-b
      - media-c
  - name: DEFAULT_AUTO
    type: url-test
    url: https://www.gstatic.com/generate_204
    interval: 300
    tolerance: 80
    proxies:
      - hop-1
      - hop-2
      - hop-3
  - name: FINAL
    type: select
    proxies:
      - DEFAULT_AUTO
      - WORK_STABLE
      - DIRECT

rules:
  - IP-CIDR,192.168.0.0/16,DIRECT
  - IP-CIDR,10.0.0.0/8,DIRECT
  - DOMAIN-SUFFIX,internal.corp,WORK_STABLE
  - RULE-SET,streaming,MEDIA_AUTO
  - GEOIP,CN,DIRECT
  - MATCH,FINAL
```
This skeleton is not copy-paste gospel—your node names, provider lists, and tolerance values should reflect reality. The takeaway is the separation of concerns: business traffic never competes with media auto-selection, and you still have a human override via FINAL.
Debugging Misroutes Without Burning an Afternoon
When a site loads from the wrong region or latency spikes, work through a short checklist:
- Confirm the winning rule — use the client’s connection or log view to see which rule name matched. If the UI shows an unexpected policy, your ordering—not the group—is wrong.
- Validate DNS path — ensure queries hit Clash’s DNS listener when fake-ip is enabled; split DNS is a frequent leak that looks like a “bad rule.”
- Revisit GEOIP and RULE-SET breadth — a domain can be miscategorized if lists overlap; narrow lists win only if placed earlier.
- Test the group itself — swap a url-test group to a manual select temporarily to see if automation is masking a dead node.
Resist sprinkling REJECT rules for “security”: broad blocking breaks legitimate edge cases. Prefer routing the final MATCH to a conservative auto group, then tighten exceptions as you observe real traffic.
Putting It Together
Custom policy groups turn Clash from a static switch into a traffic-aware router. Rules provide the taxonomy; groups provide the strategy. Respect first-match ordering, isolate scenarios into named groups, externalize bulky lists with providers, and treat DNS mode as part of the same design—not an appendix.
Compared with one-size-fits-all clients that hide routing behind opaque toggles, a well-structured Clash profile ages better: you can reason about it months later, diff it in Git, and share snippets with teammates without relearning magic defaults.
When you are ready to run the stack on real hardware, grab an up-to-date build for your platform and apply what you configured—most graphical clients surface group selectors and connection logs so you can verify each hop visually.