Choose a title
Pick Valorant or CS2 to load the correct sensitivity semantics.
Kenny Schrub · Retired · France
kennyS redefined what AWPing could look like in CS — his snap-flick style influenced an entire generation of AWPers and made him a legend of the game.
eDPI
880
DPI × Sensitivity
DPI
400
Sensitivity
2.2
CS2 in-game
Refresh Rate
240 Hz
cm/360°
≈47.2
Physical wrist movement
Mouse
HyperX Pulsefire Haste
400 DPI
Monitor
BenQ ZOWIE XL2540K
240 Hz · 1920×1080
CSGO-PVxtS-rKBhm-zNKLH-WJmTR-OwmaF
Paste in CS2 → Settings → Game → Crosshair → Share or Import → Import.
Convert sens 2.2 (CS2) to Valorant, CS2, Apex, Overwatch 2 and more — instantly.
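Cross-game conversion works by matching degrees turned per mouse count (equivalently, keeping cm/360° constant). A sketch using community-documented yaw constants; verify these against your game version before trusting the output:

```python
# Degrees per mouse count at in-game sensitivity 1.0.
# These are widely cited community values, not official vendor numbers.
YAW = {
    "cs2": 0.022,
    "valorant": 0.07,
    "apex": 0.022,        # Source-derived engine, same yaw as CS2
    "overwatch2": 0.0066,
}

def convert_sens(sens: float, src: str, dst: str) -> float:
    """Hold deg/count constant: sens_dst = sens_src * yaw_src / yaw_dst."""
    return sens * YAW[src] / YAW[dst]

print(round(convert_sens(2.2, "cs2", "valorant"), 3))  # 0.691
```

Because CS2 and Apex share the same yaw, a CS2 sensitivity carries over unchanged; Valorant's larger yaw means the number shrinks even though the physical cm/360° stays identical.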
What sensitivity does kennyS use?
kennyS plays CS2 at 2.2 in-game sensitivity with 400 DPI, giving an eDPI of 880. In physical terms, that works out to roughly 47.2 cm per 360° rotation.
What is kennyS's eDPI?
kennyS's eDPI (Effective DPI) is 880, calculated as 400 DPI × 2.2 in-game sensitivity. eDPI lets you compare sensitivities across players regardless of DPI setting.
What mouse does kennyS use?
kennyS uses the HyperX Pulsefire Haste, set to 400 DPI.
What monitor does kennyS use?
kennyS uses the BenQ ZOWIE XL2540K running at 240 Hz with a 1920×1080 resolution.
How do I copy kennyS's crosshair in CS2?
Copy the code above and paste it in CS2 → Settings → Game → Crosshair → Share or Import → Import.
Pro Tip
Filter by game first—Valorant sensitivity does not translate 1:1 to CS2 without cm/360° conversion.
Pick Valorant or CS2 to load the correct sensitivity semantics.
Compare eDPI, resolution, and peripheral choices side by side.
Test adjustments in aim trainers before ranked sessions.
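When copying a pro's setup onto your own hardware, the practical question is: what in-game sensitivity reproduces their cm/360° at your DPI? A hypothetical helper that inverts the standard cm/360° formula (yaw 0.022 assumed for CS2):

```python
def sens_for_cm360(target_cm: float, dpi: float, yaw: float = 0.022) -> float:
    """In-game sensitivity that yields target_cm per 360° at the given DPI."""
    # cm/360 = 360 * 2.54 / (yaw * dpi * sens), solved for sens
    return (360.0 * 2.54) / (yaw * dpi * target_cm)

# Match ~47.2 cm/360 (eDPI 880) on an 800 DPI mouse:
print(round(sens_for_cm360(47.23, 800), 2))  # 1.1
print(round(sens_for_cm360(47.23, 400), 2))  # 2.2
```

This is why "copy responsibly" means copying cm/360°, not the raw sensitivity number: the same feel survives a DPI change only if the product stays constant.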
Pro Player Settings Database is structured so you can move from inputs to defensible outputs without hunting for hidden options. Step 1 (“Choose a title”): pick Valorant or CS2 to load the correct sensitivity semantics. Step 2 (“Browse pros”): compare eDPI, resolution, and peripheral choices side by side. Step 3 (“Copy responsibly”): test adjustments in aim trainers before ranked sessions. Following that sequence reduces rounding drift: you lock the game and its yaw constant first, then layer refinements (zoom or ADS multipliers, raw input, crosshair fine-tuning) only after the baseline numbers look sensible. When you revisit a setup weeks later, the same order of operations makes spreadsheets and screenshots easier to reconcile with what the UI showed.
Muscle memory rewards narrow eDPI windows; drastic changes destroy flick consistency until retrained.
Hardware parity matters: identical settings on different mice or pads still feel different due to sensor LOD and glide speed.
Revisit Pro Player Settings Database whenever baseline assumptions shift: DPI, in-game sensitivity, refresh rate, or hardware targets. The numbers you record today become the audit trail that makes tomorrow’s adjustment easy to explain to teammates or coaches reviewing your setup.
Competitive FPS performance is a stack of human factors, display timing, and settings you can actually sustain across thousands of repetitions. Crosshair codes encode color, thickness, outlines, and center dot behavior; what reads cleanly on Mirage may wash out on Icebox or Nuke. Sensitivity math reduces to a measurable cm/360°, yet muscle memory still prefers whatever you have rehearsed for seasons. Frame-time and monitor latency tools help you reason about end-to-end click-to-photon delay, but real-world variance from fullscreen optimizations, Reflex, and driver settings will diverge slightly from any single formula. Treat pro settings as structured experiments: change one variable at a time, log outcomes in aim trainers or scrims, and revert when something feels worse under pressure.
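One piece of the click-to-photon chain is easy to reason about exactly: the display refresh interval. A back-of-envelope sketch (a budgeting aid, not a measurement tool; real end-to-end latency also includes input polling, game simulation, render queue, and panel response):

```python
def frame_interval_ms(hz: float) -> float:
    """Time between refreshes for a given refresh rate, in milliseconds."""
    return 1000.0 / hz

print(round(frame_interval_ms(240), 2))  # 4.17
print(round(frame_interval_ms(60), 2))   # 16.67
```

Moving from 60 Hz to 240 Hz cuts the refresh interval by about 12.5 ms; whether that is noticeable depends on every other stage in the chain, which is why single-formula latency claims deserve skepticism.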
Seasoned users pair the in-app insight—“Filter by game first—Valorant sensitivity does not translate 1:1 to CS2 without cm/360° conversion.”—with external checks specific to their game. For Pro Player Settings Database, treat that guidance as a hypothesis: note the assumption, measure the delta against your own match and aim-trainer history, and update defaults when your results disagree with generic benchmarks. Documenting those adjustments is what turns a quick answer into a repeatable workflow your team can audit.
We aggregate public interviews and streams; always confirm on live broadcasts because pros experiment often.
Use feedback channels; roster updates depend on public source availability.
It changes visual target size; some players compensate with ADS multipliers—convert using cm/360°, not arbitrary feel.