Traffic Generators
Latest revision as of 21:36, 9 July 2019
Siege
Siege is an open source stress / regression test and benchmark utility.
siege -c100 -t30S -d10 -b -v aman.info.tm
Some Extensions
--header="Cookie: SESSb43b2d1d084de3872c89b0b125b64564=Jafuk06rppYAXIxWaU0LY2VmqxN997DsKU3BSgfArCM" -f /path/to/some-urls.txt
The some-urls.txt file is just a list of URLs, one per line:
http://www.mywebsite.com/about-us
http://www.mywebsite.com/contact-us
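A URL list like the one above can be generated and handed to siege with its -f flag. A minimal sketch (the siege invocation is commented out so the sketch runs without a live target; the URLs are the example ones from above):

```shell
# Write the URL list and point siege at it via -f.
cat > some-urls.txt <<'EOF'
http://www.mywebsite.com/about-us
http://www.mywebsite.com/contact-us
EOF

# Commented out: needs a reachable target.
# siege -c100 -t30S -f some-urls.txt
grep -c '' some-urls.txt   # prints 2 (number of URLs)
```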
Httperf
Installation:
sudo apt install httperf
Usage:
httperf --server waf.avitest.com --port 80 --num-conns 100 --rate 10 --timeout 1
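With --num-conns 100 and --rate 10, httperf opens 10 new connections per second, so the run above lasts roughly num-conns / rate seconds. A quick check of that arithmetic:

```shell
# Expected duration of the httperf run above: total connections / connection rate.
NUM_CONNS=100
RATE=10
echo "$((NUM_CONNS / RATE)) seconds"   # prints "10 seconds"
```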
Curl
curl -s -o /dev/null -w "%{http_code}" http://waf.avitest.com
seq 100 | parallel -j0 curl -s -o /dev/null -w "%{http_code}" http://waf.avitest.com
for i in `seq 1 99999`; do echo "Status Code:"; curl -s -o /dev/null -w "%{http_code}" https://10.1.1.1; sleep 1; done
Bash Script
# grab the HTTP status line, discard error messages
OUT=$( curl -sfI --connect-timeout 1 http://10.52.200.32/ | grep "HTTP" )
# exit code of the pipeline (grep fails when curl produced no output)
RET=$?
if [[ $RET -ne 0 ]]; then
  echo "Time out: $RET"
else
  echo "$OUT"
fi
while true; do ./monitor.sh; sleep 1; done
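To act on the status code from the loops above, the %{http_code} value can be classified in plain shell. `classify` below is a hypothetical helper, not part of curl:

```shell
# Map an HTTP status code to a coarse result class.
classify() {
  code=$1
  if [ "$code" -ge 200 ] && [ "$code" -lt 300 ]; then
    echo OK
  elif [ "$code" -ge 300 ] && [ "$code" -lt 400 ]; then
    echo REDIRECT
  else
    echo ERROR
  fi
}

# Typical use against a live target (needs a reachable server):
# classify "$(curl -s -o /dev/null -w '%{http_code}' http://waf.avitest.com)"
classify 200   # prints OK
classify 503   # prints ERROR
```

Note that curl reports 000 when the connection itself fails, which this helper also treats as ERROR.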
Apache Benchmark
ab -t 1 -n 1000 -c 300 http://waf.avitest.com/
Increase Apache MaxClients (renamed MaxRequestWorkers in newer versions) on the backend server:
sudo nano /etc/apache2/mods-available/mpm_event.conf
MaxRequestWorkers 1000
ServerLimit 565
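For the threaded MPMs (worker, event), MaxRequestWorkers is capped at ServerLimit × ThreadsPerChild, with ThreadsPerChild defaulting to 25, so the ServerLimit of 565 above leaves plenty of headroom for 1000 workers. A quick check of that product:

```shell
# Upper bound on MaxRequestWorkers for threaded MPMs: ServerLimit * ThreadsPerChild.
SERVER_LIMIT=565
THREADS_PER_CHILD=25   # Apache default
echo $((SERVER_LIMIT * THREADS_PER_CHILD))   # prints 14125
```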
Locust
Installation:
sudo apt-get update
sudo apt-get -y install python-pip python-dev libxml2-dev libxslt-dev
sudo pip install locustio
Install pyzmq for tests across multiple servers to increase the testing capacity:
sudo pip install pyzmq
Raise the open-file limit so the OS does not run out of file descriptors at high concurrency:
ulimit -n 9999
Locust File
Single “task” which gets a specific webpage:
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def get_something(self):
        self.client.get("/something")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
The test below logs a user in to groupster, then requests the home page once for every three times it requests a group page:
from locust import HttpLocust, TaskSet

def login(l):
    l.client.post("/login/process", {"email":"me@someemail.com", "pass":"password"})

def index(l):
    l.client.get("/")

def group(l):
    l.client.get("/group/1/")

class UserBehavior(TaskSet):
    tasks = {index:1, group:3}

    def on_start(self):
        login(self)

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000
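The tasks = {index:1, group:3} weighting means Locust picks group three times as often as index. A small shell sketch of that same 1:3 ratio, using a 4-slot pool sampled deterministically (no Locust required):

```shell
# A 4-slot pool encodes the 1:3 weighting; two passes over it = 8 picks.
POOL="index group group group"
COUNT_INDEX=0
COUNT_GROUP=0
for t in $POOL $POOL; do
  case $t in
    index) COUNT_INDEX=$((COUNT_INDEX + 1)) ;;
    group) COUNT_GROUP=$((COUNT_GROUP + 1)) ;;
  esac
done
echo "index=$COUNT_INDEX group=$COUNT_GROUP"   # prints "index=2 group=6"
```

In Locust itself the selection is random, so the ratio holds on average rather than exactly.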
Executing
- Go to the directory containing the locustfile and run the command below.
- This only starts the web interface for controlling tests; it does not start a test itself:
locust -f ./locustfile.py --host=http://somedomain.io --master
Use the web interface to set the test parameters and to start and stop the test:
http://localhost:8089
Distributed Mode
- To generate significant load it is almost always necessary to run Locust in distributed mode.
- The master node does not simulate any users itself.
- Start one or more slaves with the --slave flag.
- Both the master and each slave machine must have a copy of the Locust test scripts when running distributed.
- Run the command below on each slave node as well:
ulimit -n 9999
Start “master” node:
locust --host=http://somedomain.io --master
Then start any “slave” nodes, giving them a reference to the master node:
locust --host=http://somedomain.io --slave --master-host=192.168.10.100
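A common pattern is to start several slave processes on one machine, roughly one per CPU core. A hypothetical launcher sketch, with the actual locust invocation commented out so it runs even without Locust installed (addresses are the ones from above):

```shell
# Launch N slave processes against the master at 192.168.10.100.
N=4
STARTED=0
for i in $(seq "$N"); do
  echo "starting slave $i"
  # locust --host=http://somedomain.io --slave --master-host=192.168.10.100 &
  STARTED=$((STARTED + 1))
done
# wait   # uncomment together with the backgrounded locust processes
echo "$STARTED slaves started"   # prints "4 slaves started"
```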
Without WebUI
Run in headless (CLI) mode as below:
locust -f locustfile.py --no-web -c 1000 -r 100 --run-time 1h30m
In Distributed mode it will wait until all slave nodes have connected before starting the test:
locust -f locustfile.py --no-web -c 1000 -r 100 --run-time 1h30m --expect-slaves
Slowhttptest
Source: ubuntu.com
This section is under construction.
Bombardier
Source: softwaretester.info
Preparation
sudo apt install -y curl git
curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
tar xvf go1.8.linux-amd64.tar.gz
sudo chown -R root:root go
sudo mv go /usr/local/
Configure go (for user)
mkdir ~/.go
echo "GOPATH=$HOME/.go" >> ~/.bashrc
echo "export GOPATH" >> ~/.bashrc
echo "PATH=\$PATH:/usr/local/go/bin:\$GOPATH/bin" >> ~/.bashrc
source ~/.bashrc
go version
Install bombardier
go get -u github.com/codesenberg/bombardier
bombardier --help
Usage:
- Run with 10 connections for 5 seconds and show latency statistics:
bombardier -d 5s -c 10 -l -k https://www.heise.de
- Test for 60 seconds with 250 connections, skipping TLS certificate verification (-k):
bombardier -c250 -k -d 60s "https://10.70.28.12/tools/healthcheck?api-key=0145463c-df61-4ebe-bbf1-ed3f43t4f57"