Commit 880a2f6 by pythonJaRvis (1 parent: f54ec8a)

1 file changed: README.md (+116, -20)
# Jarvis

## Dataset and Ground truth

The micro-benchmarks and macro-benchmarks are provided in the `dataset` and `ground_truth` directories.

## Getting Jarvis to run

Prerequisites:

* Python 3.8
* PyCG: tool/PyCG
* Jarvis: tool/Jarvis

The entry point is `jarvis_cli.py`.

Jarvis usage:

```bash
$ python3 tool/Jarvis/jarvis_cli.py [module_path1 module_path2 module_path3...] [--package] [--decy] [-o output_path]
```
Jarvis help:

```bash
$ python3 tool/Jarvis/jarvis_cli.py -h
usage: jarvis_cli.py [-h] [--package PACKAGE] [--decy] [--precision]
                     [--moduleEntry [MODULEENTRY ...]]
                     [--operation {call-graph,key-error}] [-o OUTPUT]
                     [module ...]

positional arguments:
  module                modules to be processed, which are also 'Demands' in D.W. mode

options:
  -h, --help            show this help message and exit
  --package PACKAGE     Package containing the code to be analyzed
  --decy                whether analyze the dependencies
  --precision           whether flow-sensitive
  --entry-point [MODULEENTRY ...]
                        Entry functions to be processed
  -o OUTPUT, --output OUTPUT
                        Output call graph path
```
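The help text above maps onto a standard `argparse` interface. As a rough sketch (not Jarvis's actual source — the option names are copied from the help output, everything else is assumption):

```python
import argparse

# Sketch of a parser matching the help text above; Jarvis's real
# implementation may differ in details and defaults.
parser = argparse.ArgumentParser(prog="jarvis_cli.py")
parser.add_argument("module", nargs="*",
                    help="modules to be processed ('Demands' in D.W. mode)")
parser.add_argument("--package", help="package containing the code to be analyzed")
parser.add_argument("--decy", action="store_true",
                    help="whether to analyze the dependencies")
parser.add_argument("--precision", action="store_true",
                    help="whether the analysis is flow-sensitive")
parser.add_argument("-o", "--output", help="output call graph path")

args = parser.parse_args(["pkg/main.py", "--package", "pkg", "-o", "cg.json"])
print(args.module, args.decy, args.output)  # ['pkg/main.py'] False cg.json
```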
*Example 1:* analyze `bpytop.py` in E.A. mode.

```bash
$ python3 tool/Jarvis/jarvis_cli.py dataset/macro_benchmark/pj/bpytop/bpytop.py --package dataset/macro_benchmark/pj/bpytop -o jarvis.json
```

*Example 2:* analyze `bpytop.py` in D.W. mode. Note that all dependencies must first be installed in the virtual environment.

```bash
# create a virtualenv environment
$ virtualenv venv --python=python3.8
# install the dependencies in the virtualenv environment
$ python3 -m pip install psutil
# run Jarvis
$ python3 tool/Jarvis/jarvis_cli.py dataset/macro_benchmark/pj/bpytop/bpytop.py --package dataset/macro_benchmark/pj/bpytop --decy -o jarvis.json
```
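Once `jarvis.json` is produced, it can be post-processed like any call graph. A minimal sketch, assuming a PyCG-style adjacency format (`{"caller": ["callee", ...]}`; the exact schema should be checked against real Jarvis output):

```python
import json

def count_edges(cg):
    """Return (number of caller nodes, number of call edges)."""
    return len(cg), sum(len(callees) for callees in cg.values())

# In practice: cg = json.load(open("jarvis.json"))
# Hypothetical sample in the assumed adjacency format:
cg = {
    "bpytop.main": ["bpytop.Config.load", "psutil.cpu_percent"],
    "bpytop.Config.load": [],
}
print(count_edges(cg))  # (2, 2)
```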
## Evaluation

### RQ1 and RQ2 Setup

`cd` to the root directory of the unzipped files.

```bash
# 1. run the micro-benchmark
$ ./reproducing_RQ12_setup/micro_benchmark/test_All.sh
# 2. run the macro-benchmark
$ ./reproducing_RQ12_setup/macro_benchmark/pycg_EA.sh
# PyCG iterates once
$ ./reproducing_RQ12_setup/macro_benchmark/pycg_EW.sh 1
# PyCG iterates twice
$ ./reproducing_RQ12_setup/macro_benchmark/pycg_EW.sh 2
# PyCG iterates to convergence
$ ./reproducing_RQ12_setup/macro_benchmark/pycg_EW.sh
$ ./reproducing_RQ12_setup/macro_benchmark/jarvis_DA.sh
$ ./reproducing_RQ12_setup/macro_benchmark/jarvis_EA.sh
$ ./reproducing_RQ12_setup/macro_benchmark/jarvis_DW.sh
```
### RQ1. Scalability Evaluation

#### Scalability results

Run

```bash
$ python3 ./reproducing_RQ1/gen_table.py
```

The results are shown below:

![scalability](Jarvis/reproducing_RQ1/scalability.png)
#### AGs and FAGs

Run

```shell
$ pip3 install matplotlib
$ pip3 install numpy
$ python3 ./reproducing_RQ1/FAG/plot.py
```

The generated graphs are `pycg-ag.pdf`, `pycg-change-ag.pdf`, and `jarvis-fag.pdf`, corresponding to Fig. 9a, Fig. 9b, and Fig. 10, respectively.
### RQ2. Accuracy Evaluation

#### Accuracy results

Run

```bash
$ python3 ./reproducing_RQ2/gen_table.py
```

The generated results:

![accuracy](Jarvis/reproducing_RQ2/accuracy.png)
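The accuracy tables compare generated call graphs against the ground truth. A hedged sketch of how edge-level precision and recall might be computed (the adjacency-dict format is an assumption, not necessarily `gen_table.py`'s actual logic):

```python
def edges(cg):
    """Flatten an adjacency dict into a set of (caller, callee) edges."""
    return {(c, e) for c, callees in cg.items() for e in callees}

def precision_recall(found, truth):
    tp = len(found & truth)  # true-positive edges
    p = tp / len(found) if found else 1.0
    r = tp / len(truth) if truth else 1.0
    return p, r

# Hypothetical ground-truth and tool-generated graphs:
truth = edges({"main": ["f", "g"], "f": ["h"]})
found = edges({"main": ["f", "g"], "f": ["i"]})
print(precision_recall(found, truth))  # 2/3 precision, 2/3 recall
```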
