Bump google.golang.org/api
22
vendor/github.com/google/pprof/.github/ISSUE_TEMPLATE.md
generated
vendored
Normal file
@@ -0,0 +1,22 @@
Please answer these questions before submitting your issue. Thanks!

### What version of pprof are you using?

If you are using pprof via `go tool pprof`, what's your `go env` output?
If you run pprof from GitHub, what's the Git revision?

### What operating system and processor architecture are you using?

### What did you do?

If possible, provide a recipe for reproducing the error.
Attaching a profile you are trying to analyze is good.

### What did you expect to see?

### What did you see instead?
8
vendor/github.com/google/pprof/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,8 @@
.DS_Store
*~
*.orig
*.exe
.*.swp
core
coverage.txt
pprof
61
vendor/github.com/google/pprof/.travis.yml
generated
vendored
Normal file
@@ -0,0 +1,61 @@
language: go

go_import_path: github.com/google/pprof

matrix:
  include:
    - os: linux
      go: 1.10.x
    - os: linux
      go: 1.11.x
    - os: linux
      go: master
    - os: osx
      osx_image: xcode8.3
      go: 1.10.x
    - os: osx
      osx_image: xcode8.3
      go: 1.11.x
    - os: osx
      osx_image: xcode8.3
      go: master
    - os: osx
      osx_image: xcode9.4
      go: 1.10.x
    - os: osx
      osx_image: xcode9.4
      go: 1.11.x
    - os: osx
      osx_image: xcode9.4
      go: master
    - os: osx
      osx_image: xcode10.1
      go: 1.10.x
    - os: osx
      osx_image: xcode10.1
      go: 1.11.x
    - os: osx
      osx_image: xcode10.1
      go: master

addons:
  apt:
    packages:
      - graphviz
  homebrew:
    packages:
      - graphviz
    update: true

before_install:
  - go get -u golang.org/x/lint/golint honnef.co/go/tools/cmd/...

script:
  - gofmtdiff=$(gofmt -s -d .) && if [ -n "$gofmtdiff" ]; then printf 'gofmt -s found:\n%s\n' "$gofmtdiff" && exit 1; fi
  - golintlint=$(golint ./...) && if [ -n "$golintlint" ]; then printf 'golint found:\n%s\n' "$golintlint" && exit 1; fi
  - go vet -all ./...
  - gosimple ./...
  - ./test.sh

after_success:
  - bash <(curl -s https://codecov.io/bash)
7
vendor/github.com/google/pprof/AUTHORS
generated
vendored
Normal file
@@ -0,0 +1,7 @@
# This is the official list of pprof authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as:
# Name or Organization <email address>
# The email address is not required for organizations.
Google Inc.
77
vendor/github.com/google/pprof/CONTRIBUTING.md
generated
vendored
Normal file
@@ -0,0 +1,77 @@
Want to contribute? Great! First, read this page (including the small print at the end).

# Before you contribute

As an individual, sign the [Google Individual Contributor License
Agreement](https://cla.developers.google.com/about/google-individual) (CLA)
online. This is required for any of your code to be accepted.

Before you start working on a larger contribution, get in touch with us first
through the issue tracker with your idea so that we can help out and possibly
guide you. Coordinating up front makes it much easier to avoid frustration later
on.

# Development

Make sure `GOPATH` is set in your current shell. The common way is to have
something like `export GOPATH=$HOME/gocode` in your `.bashrc` file so that it's
automatically set in all console sessions.

To get the source code, run

```
go get github.com/google/pprof
```

To run the tests, do

```
cd $GOPATH/src/github.com/google/pprof
go test -v ./...
```

When you wish to work with your own fork of the source (which is required to be
able to create a pull request), you'll want to add your fork repo as another Git
remote in the same `github.com/google/pprof` directory. Otherwise, if you `go
get` your fork directly, you'll get errors like `use of internal package
not allowed` when running tests. To set up the remote, do something like

```
cd $GOPATH/src/github.com/google/pprof
git remote add aalexand git@github.com:aalexand/pprof.git
git fetch aalexand
git checkout -b my-new-feature
# hack hack hack
go test -v ./...
git commit -a -m "Add new feature."
git push aalexand
```

where `aalexand` is your GitHub user ID. Then proceed to the GitHub UI to send a
code review.

# Code reviews

All submissions, including submissions by project members, require review.
We use GitHub pull requests for this purpose.

The pprof source code is in Go with a bit of JavaScript, CSS and HTML. If you
are new to Go, read [Effective Go](https://golang.org/doc/effective_go.html) and
the [summary on typical comments during Go code
reviews](https://github.com/golang/go/wiki/CodeReviewComments).

Cover all new functionality with tests. Enable Travis on your forked repo,
enable builds of branches and make sure Travis is happily green for the branch
with your changes.

Code coverage is measured for each pull request and is expected to go up with
every change.

Pull requests not meeting the above guidelines will get less attention than good
ones, so make sure your submissions are high quality.

# The small print

Contributions made by corporations are covered by a different agreement than the
one above, the [Software Grant and Corporate Contributor License
Agreement](https://cla.developers.google.com/about/google-corporate).
16
vendor/github.com/google/pprof/CONTRIBUTORS
generated
vendored
Normal file
@@ -0,0 +1,16 @@
# People who have agreed to one of the CLAs and can contribute patches.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Google employees are listed here
# but not in AUTHORS, because Google holds the copyright.
#
# https://developers.google.com/open-source/cla/individual
# https://developers.google.com/open-source/cla/corporate
#
# Names should be added to this file as:
# Name <email address>
Raul Silvera <rsilvera@google.com>
Tipp Moseley <tipp@google.com>
Hyoun Kyu Cho <netforce@google.com>
Martin Spier <spiermar@gmail.com>
Taco de Wolff <tacodewolff@gmail.com>
Andrew Hunter <andrewhhunter@gmail.com>
202
vendor/github.com/google/pprof/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
109
vendor/github.com/google/pprof/README.md
generated
vendored
Normal file
@@ -0,0 +1,109 @@
[Build Status](https://travis-ci.org/google/pprof)
[codecov](https://codecov.io/gh/google/pprof)

# Introduction

pprof is a tool for visualization and analysis of profiling data.

pprof reads a collection of profiling samples in profile.proto format and
generates reports to visualize and help analyze the data. It can generate both
text and graphical reports (through the use of the dot visualization package).

profile.proto is a protocol buffer that describes a set of callstacks
and symbolization information. A common usage is to represent a set of
sampled callstacks from statistical profiling. The format is
described in the [proto/profile.proto](./proto/profile.proto) file. For details on protocol
buffers, see https://developers.google.com/protocol-buffers

Profiles can be read from a local file, or over http. Multiple
profiles of the same type can be aggregated or compared.

If the profile samples contain machine addresses, pprof can symbolize
them through the use of the native binutils tools (addr2line and nm).

**This is not an official Google product.**

# Building pprof

Prerequisites:

- Go development kit of a [supported version](https://golang.org/doc/devel/release.html#policy).
  Follow [these instructions](http://golang.org/doc/code.html) to install the
  go tool and set up GOPATH.

- Graphviz: http://www.graphviz.org/
  Optional, used to generate graphic visualizations of profiles

To build and install it, use the `go get` tool.

    go get -u github.com/google/pprof

Remember to set GOPATH to the directory where you want pprof to be
installed. The binary will be in `$GOPATH/bin` and the sources under
`$GOPATH/src/github.com/google/pprof`.

# Basic usage

pprof can read a profile from a file or directly from a server via http.
Specify the profile input(s) in the command line, and use options to
indicate how to format the report.

## Generate a text report of the profile, sorted by hotness:

```
% pprof -top [main_binary] profile.pb.gz
Where
    main_binary:   Local path to the main program binary, to enable symbolization
    profile.pb.gz: Local path to the profile in a compressed protobuf, or
        URL to the http service that serves a profile.
```

## Generate a graph in an SVG file, and open it with a web browser:

```
pprof -web [main_binary] profile.pb.gz
```

## Run pprof in interactive mode:

If no output formatting option is specified, pprof runs in interactive mode,
where it reads the profile and accepts interactive commands for visualization
and refinement of the profile.

```
pprof [main_binary] profile.pb.gz

This will open a simple shell that takes pprof commands to generate reports.
Type 'help' for available commands/options.
```

## Run pprof via a web interface

If the `-http` flag is specified, pprof starts a web server at
the specified host:port that provides an interactive web-based interface to pprof.
Host is optional, and is "localhost" by default. Port is optional, and is a
random available port by default. `-http=":"` starts a server locally at
a random port.

```
pprof -http=[host]:[port] [main_binary] profile.pb.gz
```

The preceding command should automatically open your web browser at
the right page; if not, you can manually visit the specified port in
your web browser.

## Using pprof with Linux Perf

pprof can read `perf.data` files generated by the
[Linux perf](https://perf.wiki.kernel.org/index.php/Main_Page) tool by using the
`perf_to_profile` program from the
[perf_data_converter](https://github.com/google/perf_data_converter) package.

## Further documentation

See [doc/README.md](doc/README.md) for more detailed end-user documentation.

See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution documentation.

See [proto/README.md](proto/README.md) for a description of the profile.proto format.
14
vendor/github.com/google/pprof/appveyor.yml
generated
vendored
Normal file
@@ -0,0 +1,14 @@
clone_folder: c:\go\src\github.com\google\pprof

install:
  - cinst graphviz

before_build:
  - go get github.com/ianlancetaylor/demangle
  - go get github.com/chzyer/readline

build_script:
  - go build github.com/google/pprof

test_script:
  - go test -v ./...
338
vendor/github.com/google/pprof/doc/README.md
generated
vendored
Normal file
@@ -0,0 +1,338 @@
# pprof

pprof is a tool for visualization and analysis of profiling data.

pprof reads a collection of profiling samples in profile.proto format and
generates reports to visualize and help analyze the data. It can generate both
text and graphical reports (through the use of the dot visualization package).

profile.proto is a protocol buffer that describes a set of callstacks
and symbolization information. A common usage is to represent a set of
sampled callstacks from statistical profiling. The format is
described in the src/proto/profile.proto file. For details on protocol
buffers, see https://developers.google.com/protocol-buffers

Profiles can be read from a local file, or over http. Multiple
profiles of the same type can be aggregated or compared.

If the profile samples contain machine addresses, pprof can symbolize
them through the use of the native binutils tools (addr2line and nm).

# pprof profiles

pprof operates on data in the profile.proto format. Each profile is a collection
of samples, where each sample is associated with a point in a location hierarchy,
one or more numeric values, and a set of labels. Often these profiles represent
data collected through statistical sampling of a program, so each sample
describes a program call stack and a number or weight of samples collected at a
location. pprof is agnostic to the profile semantics, so other uses are
possible. The interpretation of the reports generated by pprof depends on the
semantics defined by the source of the profile.
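To make the sample/location/value structure concrete, here is a minimal sketch (an editor-added illustration, not part of the upstream docs) that reads a serialized profile with this repository's `profile` package and prints each sample's values and leaf function; the `cpu.pb.gz` path and the fields chosen for printing are illustrative assumptions.

```go
// Minimal sketch: walk the samples of a profile.proto file.
// Assumes a gzipped profile at ./cpu.pb.gz (path is illustrative).
package main

import (
    "fmt"
    "log"
    "os"

    "github.com/google/pprof/profile"
)

func main() {
    f, err := os.Open("cpu.pb.gz")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // profile.Parse accepts both gzipped and raw profile.proto input.
    p, err := profile.Parse(f)
    if err != nil {
        log.Fatal(err)
    }

    // Each sample carries one value per sample type (e.g. samples, cpu time).
    for _, st := range p.SampleType {
        fmt.Printf("sample type: %s/%s\n", st.Type, st.Unit)
    }
    for _, s := range p.Sample {
        leaf := "??"
        if len(s.Location) > 0 && len(s.Location[0].Line) > 0 && s.Location[0].Line[0].Function != nil {
            leaf = s.Location[0].Line[0].Function.Name
        }
        fmt.Println(s.Value, leaf)
    }
}
```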
# Usage Modes

There are a few different ways of using `pprof`.

## Report generation

If a report format is requested on the command line:

    pprof <format> [options] source

pprof will generate a report in the specified format and exit.
Formats can be either text, or graphical. See below for details about
supported formats, options, and sources.

## Interactive terminal use

Without a format specifier:

    pprof [options] source

pprof will start an interactive shell in which the user can type
commands. Type `help` to get online help.

## Web interface

If a host:port is specified on the command line:

    pprof -http=[host]:[port] [options] source

pprof will start serving HTTP requests on the specified port. Visit
the HTTP url corresponding to the port (typically `http://<host>:<port>/`)
in a browser to see the interface.

# Details

The objective of pprof is to generate a report for a profile. The report is
generated from a location hierarchy, which is reconstructed from the profile
samples. Each location contains two values: *flat* is the value of the location
itself, while *cum* is the value of the location plus all its
descendants. Samples that include a location multiple times (e.g. for recursive
functions) are counted only once per location.

## Options

*options* configure the contents of a report. Each option has a value,
which can be boolean, numeric, or a string. While only one format can
be specified, most options can be selected independently of each
other.

Some common pprof options are:

* **-flat** [default], **-cum**: Sort entries based on their flat or cumulative
  weight respectively, on text reports.
* **-functions** [default], **-filefunctions**, **-files**, **-lines**,
  **-addresses**: Generate the report using the specified granularity.
* **-noinlines**: Attribute inlined functions to their first out-of-line caller.
  For example, a command like `pprof -list foo -noinlines profile.pb.gz` can be
  used to produce the annotated source listing attributing the metrics in the
  inlined functions to the out-of-line calling line.
* **-nodecount= _int_:** Maximum number of entries in the report. pprof will only print
  this many entries and will use heuristics to select which entries to trim.
* **-focus= _regex_:** Only include samples that include a report entry matching
  *regex*.
* **-ignore= _regex_:** Do not include samples that include a report entry matching
  *regex*.
* **-show\_from= _regex_:** Do not show entries above the first one that
  matches *regex*.
* **-show= _regex_:** Only show entries that match *regex*.
* **-hide= _regex_:** Do not show entries that match *regex*.

Each sample in a profile may include multiple values, representing different
entities associated with the sample. pprof reports include a single sample value,
which by convention is the last one specified in the report. The `sample_index=`
option selects which value to use, and can be set to a number (from 0 to the
number of values - 1) or the name of the sample value.

Sample values are numeric values associated with a unit. If pprof can recognize
these units, it will attempt to scale the values to a suitable unit for
visualization. The `unit=` option will force the use of a specific unit. For
example, `unit=sec` will force any time values to be reported in
seconds. pprof recognizes most common time and memory size units.

## Tag filtering

Samples in a profile may have tags. These tags have a name and a value; this
value can be either numeric or a string. pprof can select samples from a
profile based on these tags using the `-tagfocus` and `-tagignore` options.

Generally, these options work as follows:

* **-tagfocus=_regex_** or **-tagfocus=_range_:** Restrict to samples with tags
  matched by regexp or in range.
* **-tagignore=_regex_** or **-tagignore=_range_:** Discard samples with tags
  matched by regexp or in range.

When using `-tagfocus=regex` and `-tagignore=regex`, the regex will be compared
to each value associated with each tag. If one specifies a value
like `regex1,regex2`, then only samples with a tag value matching `regex1`
and a tag value matching `regex2` will be kept.

In addition to being able to filter on tag values, one can specify the name of
the tag which a certain value must be associated with using the notation
`-tagfocus=tagName=value`. Here, the `tagName` must match the tag's name
exactly, and the value can be either a regex or a range. If one specifies
a value like `regex1,regex2`, then samples with a tag value (paired with the
specified tag name) matching either `regex1` or matching `regex2` will match.

Here are examples explaining how `tagfocus` can be used:

* `-tagfocus 128kb:512kb` accepts a sample iff it has any numeric tag with
  memory value in the specified range.
* `-tagfocus mytag=128kb:512kb` accepts a sample iff it has a numeric tag
  `mytag` with memory value in the specified range. There isn't a way to say
  `-tagfocus mytag=128kb:512kb,16kb:32kb`
  or `-tagfocus mytag=128kb:512kb,mytag2=128kb:512kb`. Only a single value or
  range is supported for numeric tags.
* `-tagfocus someregex` accepts a sample iff it has any string tag with
  `tagName:tagValue` string matching the specified regexp. In the future, this
  will change to accept a sample iff it has any string tag with `tagValue` string
  matching the specified regexp.
* `-tagfocus mytag=myvalue1,myvalue2` matches if either of the two tag values
  is present.

`-tagignore` works similarly, except that it discards matching samples, instead
of keeping them.

If both the `-tagignore` and `-tagfocus` expressions (either a regexp or a
range) match a given sample, then the sample will be discarded.
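As a point of reference for what these filters match against, the sketch below (an editor-added illustration, not part of the upstream docs) prints the string and numeric tags carried by each sample, as exposed by this repository's `profile` package; the profile file name passed on the command line is whatever the reader supplies.

```go
// Illustration: string and numeric tags on profile samples, which is the data
// that -tagfocus/-tagignore regexps and ranges are evaluated against.
package main

import (
    "fmt"
    "log"
    "os"

    "github.com/google/pprof/profile"
)

func main() {
    if len(os.Args) < 2 {
        log.Fatal("usage: tags <profile.pb.gz>")
    }
    f, err := os.Open(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    p, err := profile.Parse(f)
    if err != nil {
        log.Fatal(err)
    }
    for _, s := range p.Sample {
        // String tags: -tagfocus/-tagignore regexps are matched against these values.
        for k, vs := range s.Label {
            fmt.Println("string tag:", k, vs)
        }
        // Numeric tags: ranges such as 128kb:512kb are compared against these values.
        for k, vs := range s.NumLabel {
            fmt.Println("numeric tag:", k, vs)
        }
    }
}
```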
## Text reports

pprof text reports show the location hierarchy in text format.

* **-text:** Prints the location entries, one per line, including the flat and cum
  values.
* **-tree:** Prints each location entry with its predecessors and successors.
* **-peek= _regex_:** Print the location entry with all its predecessors and
  successors, without trimming any entries.
* **-traces:** Prints each sample with a location per line.

## Graphical reports

pprof can generate graphical reports in the DOT format, and convert them to
multiple formats using the graphviz package.

These reports represent the location hierarchy as a graph, with a report entry
represented as a node. Solid edges represent a direct connection between
entries, while dotted edges represent a connection where some intermediate nodes
have been removed. Nodes are removed using heuristics to limit the size of
the graph, controlled by the *nodecount* option.

The size of each node represents the flat weight of the node, and the width of
each edge represents the cumulative weight of all samples going through
it. Nodes are colored according to their cumulative weight, highlighting the
paths with the highest cum weight.

* **-dot:** Generates a report in .dot format. All other formats are generated from
  this one.
* **-svg:** Generates a report in SVG format.
* **-web:** Generates a report in SVG format on a temp file, and starts a web
  browser to view it.
* **-png, -jpg, -gif, -pdf:** Generates a report in these formats.

## Annotated code

pprof can also generate reports of annotated source with samples associated with
them. For these, the source or binaries must be locally available, and the
profile must contain data with the appropriate level of detail.

pprof will look for source files on its current working directory and all its
ancestors. pprof will look for binaries on the directories specified in the
`$PPROF_BINARY_PATH` environment variable, by default `$HOME/pprof/binaries`
(`%USERPROFILE%\pprof\binaries` on Windows). It will look up binaries by name,
and if the profile includes linker build ids, it will also search for them in
a directory named after the build id.

pprof uses the binutils tools to examine and disassemble the binaries. By
default it will search for those tools in the current path, but it can also
search for them in a directory pointed to by the environment variable
`$PPROF_TOOLS`.

* **-list= _regex_:** Generates an annotated source listing for functions
  matching *regex*, with flat/cum weights for each source line.
* **-disasm= _regex_:** Generates an annotated disassembly listing for
  functions matching *regex*.
* **-weblist= _regex_:** Generates a source/assembly combined annotated listing
  for functions matching *regex*, and starts a web browser to display it.

## Comparing profiles

pprof can subtract one profile from another, provided the profiles are of
compatible types (e.g. two heap profiles). pprof has two options which can be
used to specify the filename or URL for a profile to be subtracted from the
source profile:

* **-diff_base= _profile_:** useful for comparing two profiles. Percentages in
  the output are relative to the total of samples in the diff base profile.

* **-base= _profile_:** useful for subtracting a cumulative profile, like a
  [golang block profile](https://golang.org/doc/diagnostics.html#profiling),
  from another cumulative profile collected from the same program at a later time.
  When comparing cumulative profiles collected on the same program, percentages in
  the output are relative to the difference between the total for the source
  profile and the total for the base profile.

The **-normalize** flag can be used when a base profile is specified with either
the `-diff_base` or the `-base` option. This flag scales the source profile so
that the total of samples in the source profile is equal to the total of samples
in the base profile prior to subtracting the base profile from the source
profile. This is useful for determining the relative differences between profiles,
for example, which profile has a larger percentage of CPU time used in a particular
function.

When using the **-diff_base** option, some report entries may have negative
values. If the merged profile is output as a protocol buffer, all samples in the
diff base profile will have a label with the key "pprof::base" and a value of
"true". If pprof is then used to look at the merged profile, it will behave as
if separate source and base profiles were passed in.

When using the **-base** option to subtract one cumulative profile from another
collected on the same program at a later time, percentages will be relative to
the difference between the total for the source profile and the total for
the base profile, and all values will be positive. In the general case, some
report entries may have negative values and percentages will be relative to the
total of the absolute value of all samples when aggregated at the address level.

# Fetching profiles

pprof can read profiles from a file or directly from a URL over http or https.
Its native format is a gzipped profile.proto file, but it can
also accept some legacy formats generated by
[gperftools](https://github.com/gperftools/gperftools).
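For context, a common way for a Go program to expose such a URL is the standard library's `net/http/pprof` handler; the sketch below is an editor-added illustration and not part of pprof itself, with the port and endpoint being the `net/http/pprof` defaults.

```go
// Illustrative only: a Go service exposing profile URLs that pprof can fetch,
// e.g. `pprof -seconds 30 http://localhost:6060/debug/pprof/profile`.
package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
    // The CPU profile is served at /debug/pprof/profile and the heap profile
    // at /debug/pprof/heap (standard net/http/pprof routes).
    log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```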
When fetching from a URL handler, pprof accepts options to indicate how long to
wait for the profile.

* **-seconds= _int_:** Makes pprof request a profile with the specified duration
  in seconds. Only makes sense for profiles based on elapsed time, such as CPU
  profiles.
* **-timeout= _int_:** Makes pprof wait for the specified timeout when retrieving a
  profile over http. If not specified, pprof will use heuristics to determine a
  reasonable timeout.

pprof also accepts options which allow a user to specify TLS certificates to
use when fetching or symbolizing a profile from a protected endpoint. For more
information about generating these certificates, see
https://docs.docker.com/engine/security/https/.

* **-tls\_cert= _/path/to/cert_:** File containing the TLS client certificate
  to be used when fetching and symbolizing profiles.
* **-tls\_key= _/path/to/key_:** File containing the TLS private key to be used
  when fetching and symbolizing profiles.
* **-tls\_ca= _/path/to/ca_:** File containing the certificate authority to be
  used when fetching and symbolizing profiles.

pprof also supports skipping verification of the server's certificate chain and
host name when collecting or symbolizing a profile. To skip this verification,
use "https+insecure" in place of "https" in the URL.

If multiple profiles are specified, pprof will fetch them all and merge
them. This is useful to combine profiles from multiple processes of a
distributed job. The profiles may be from different programs but must be
compatible (for example, CPU profiles cannot be combined with heap profiles).

## Symbolization

pprof can add symbol information to a profile that was collected only with
address information. This is useful for profiles for compiled languages, where
it may not be easy or even possible for the profile source to include function
names or source coordinates.

pprof can extract the symbol information locally by examining the binaries using
the binutils tools, or it can ask running jobs that provide a symbolization
interface.

pprof will attempt symbolizing profiles by default, and its `-symbolize` option
provides some control over symbolization:

* **-symbolize=none:** Disables any symbolization from pprof.

* **-symbolize=local:** Only attempts symbolizing the profile from local
  binaries using the binutils tools.

* **-symbolize=remote:** Only attempts to symbolize running jobs by contacting
  their symbolization handler.

For local symbolization, pprof will look for the binaries on the paths specified
by the profile, and then it will search for them on the path specified by the
environment variable `$PPROF_BINARY_PATH`. Also, the name of the main binary can
be passed directly to pprof as its first parameter, to override the name or
location of the main binary of the profile, like this:

    pprof /path/to/binary profile.pb.gz

By default pprof will attempt to demangle and simplify C++ names, to provide
readable names for C++ symbols. It will aggressively discard template and
function parameters. This can be controlled with the `-symbolize=demangle`
option. Note that for remote symbolization mangled names may not be provided by
the symbolization handler.

* **-symbolize=demangle=none:** Do not perform any demangling. Show mangled
  names if available.

* **-symbolize=demangle=full:** Demangle, but do not perform any
  simplification. Show full demangled names if available.

* **-symbolize=demangle=templates:** Demangle, and trim function parameters, but
  not template parameters.
303
vendor/github.com/google/pprof/driver/driver.go
generated
vendored
Normal file
@@ -0,0 +1,303 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package driver provides an external entry point to the pprof driver.
package driver

import (
    "io"
    "net/http"
    "regexp"
    "time"

    internaldriver "github.com/google/pprof/internal/driver"
    "github.com/google/pprof/internal/plugin"
    "github.com/google/pprof/profile"
)

// PProf acquires a profile, and symbolizes it using a profile
// manager. Then it generates a report formatted according to the
// options selected through the flags package.
func PProf(o *Options) error {
    return internaldriver.PProf(o.internalOptions())
}

func (o *Options) internalOptions() *plugin.Options {
    var obj plugin.ObjTool
    if o.Obj != nil {
        obj = &internalObjTool{o.Obj}
    }
    var sym plugin.Symbolizer
    if o.Sym != nil {
        sym = &internalSymbolizer{o.Sym}
    }
    var httpServer func(args *plugin.HTTPServerArgs) error
    if o.HTTPServer != nil {
        httpServer = func(args *plugin.HTTPServerArgs) error {
            return o.HTTPServer(((*HTTPServerArgs)(args)))
        }
    }
    return &plugin.Options{
        Writer:        o.Writer,
        Flagset:       o.Flagset,
        Fetch:         o.Fetch,
        Sym:           sym,
        Obj:           obj,
        UI:            o.UI,
        HTTPServer:    httpServer,
        HTTPTransport: o.HTTPTransport,
    }
}

// HTTPServerArgs contains arguments needed by an HTTP server that
// is exporting a pprof web interface.
type HTTPServerArgs plugin.HTTPServerArgs

// Options groups all the optional plugins into pprof.
type Options struct {
    Writer        Writer
    Flagset       FlagSet
    Fetch         Fetcher
    Sym           Symbolizer
    Obj           ObjTool
    UI            UI
    HTTPServer    func(*HTTPServerArgs) error
    HTTPTransport http.RoundTripper
}

// Writer provides a mechanism to write data under a certain name,
// typically a filename.
type Writer interface {
    Open(name string) (io.WriteCloser, error)
}

// A FlagSet creates and parses command-line flags.
// It is similar to the standard flag.FlagSet.
type FlagSet interface {
    // Bool, Int, Float64, and String define new flags,
    // like the functions of the same name in package flag.
    Bool(name string, def bool, usage string) *bool
    Int(name string, def int, usage string) *int
    Float64(name string, def float64, usage string) *float64
    String(name string, def string, usage string) *string

    // BoolVar, IntVar, Float64Var, and StringVar define new flags referencing
    // a given pointer, like the functions of the same name in package flag.
    BoolVar(pointer *bool, name string, def bool, usage string)
    IntVar(pointer *int, name string, def int, usage string)
    Float64Var(pointer *float64, name string, def float64, usage string)
    StringVar(pointer *string, name string, def string, usage string)

    // StringList is similar to String but allows multiple values for a
    // single flag
    StringList(name string, def string, usage string) *[]*string

    // ExtraUsage returns any additional text that should be printed after the
    // standard usage message. The extra usage message returned includes all text
    // added with AddExtraUsage().
    // The typical use of ExtraUsage is to show any custom flags defined by the
    // specific pprof plugins being used.
    ExtraUsage() string

    // AddExtraUsage appends additional text to the end of the extra usage message.
    AddExtraUsage(eu string)

    // Parse initializes the flags with their values for this run
    // and returns the non-flag command line arguments.
    // If an unknown flag is encountered or there are no arguments,
    // Parse should call usage and return nil.
    Parse(usage func()) []string
}

// A Fetcher reads and returns the profile named by src, using
// the specified duration and timeout. It returns the fetched
// profile and a string indicating a URL from where the profile
// was fetched, which may be different than src.
type Fetcher interface {
    Fetch(src string, duration, timeout time.Duration) (*profile.Profile, string, error)
}

// A Symbolizer introduces symbol information into a profile.
type Symbolizer interface {
    Symbolize(mode string, srcs MappingSources, prof *profile.Profile) error
}

// MappingSources map each profile.Mapping to the source of the profile.
// The key is either Mapping.File or Mapping.BuildId.
type MappingSources map[string][]struct {
    Source string // URL of the source the mapping was collected from
    Start  uint64 // delta applied to addresses from this source (to represent Merge adjustments)
}

// An ObjTool inspects shared libraries and executable files.
type ObjTool interface {
    // Open opens the named object file. If the object is a shared
    // library, start/limit/offset are the addresses where it is mapped
    // into memory in the address space being inspected.
    Open(file string, start, limit, offset uint64) (ObjFile, error)

    // Disasm disassembles the named object file, starting at
    // the start address and stopping at (before) the end address.
    Disasm(file string, start, end uint64) ([]Inst, error)
}

// An Inst is a single instruction in an assembly listing.
type Inst struct {
    Addr     uint64 // virtual address of instruction
    Text     string // instruction text
    Function string // function name
    File     string // source file
    Line     int    // source line
}

// An ObjFile is a single object file: a shared library or executable.
type ObjFile interface {
    // Name returns the underlying file name, if available.
    Name() string

    // Base returns the base address to use when looking up symbols in the file.
    Base() uint64

    // BuildID returns the GNU build ID of the file, or an empty string.
    BuildID() string

    // SourceLine reports the source line information for a given
    // address in the file. Due to inlining, the source line information
    // is in general a list of positions representing a call stack,
    // with the leaf function first.
    SourceLine(addr uint64) ([]Frame, error)

    // Symbols returns a list of symbols in the object file.
    // If r is not nil, Symbols restricts the list to symbols
    // with names matching the regular expression.
    // If addr is not zero, Symbols restricts the list to symbols
    // containing that address.
    Symbols(r *regexp.Regexp, addr uint64) ([]*Sym, error)

    // Close closes the file, releasing associated resources.
    Close() error
}

// A Frame describes a single line in a source file.
type Frame struct {
    Func string // name of function
    File string // source file name
    Line int    // line in file
}

// A Sym describes a single symbol in an object file.
type Sym struct {
    Name  []string // names of symbol (many if symbol was dedup'ed)
    File  string   // object file containing symbol
    Start uint64   // start virtual address
    End   uint64   // virtual address of last byte in sym (Start+size-1)
}

// A UI manages user interactions.
type UI interface {
    // Read returns a line of text (a command) read from the user.
    // prompt is printed before reading the command.
    ReadLine(prompt string) (string, error)

    // Print shows a message to the user.
    // It formats the text as fmt.Print would and adds a final \n if not already present.
    // For line-based UI, Print writes to standard error.
    // (Standard output is reserved for report data.)
    Print(...interface{})

    // PrintErr shows an error message to the user.
    // It formats the text as fmt.Print would and adds a final \n if not already present.
    // For line-based UI, PrintErr writes to standard error.
    PrintErr(...interface{})

    // IsTerminal returns whether the UI is known to be tied to an
    // interactive terminal (as opposed to being redirected to a file).
    IsTerminal() bool

    // WantBrowser indicates whether browser should be opened with the -http option.
    WantBrowser() bool

    // SetAutoComplete instructs the UI to call complete(cmd) to obtain
    // the auto-completion of cmd, if the UI supports auto-completion at all.
    SetAutoComplete(complete func(string) string)
}

// internalObjTool is a wrapper to map from the pprof external
// interface to the internal interface.
type internalObjTool struct {
    ObjTool
}

func (o *internalObjTool) Open(file string, start, limit, offset uint64) (plugin.ObjFile, error) {
    f, err := o.ObjTool.Open(file, start, limit, offset)
    if err != nil {
        return nil, err
    }
    return &internalObjFile{f}, err
}

type internalObjFile struct {
    ObjFile
}

func (f *internalObjFile) SourceLine(frame uint64) ([]plugin.Frame, error) {
    frames, err := f.ObjFile.SourceLine(frame)
    if err != nil {
        return nil, err
    }
    var pluginFrames []plugin.Frame
    for _, f := range frames {
        pluginFrames = append(pluginFrames, plugin.Frame(f))
    }
    return pluginFrames, nil
}

func (f *internalObjFile) Symbols(r *regexp.Regexp, addr uint64) ([]*plugin.Sym, error) {
    syms, err := f.ObjFile.Symbols(r, addr)
    if err != nil {
        return nil, err
    }
    var pluginSyms []*plugin.Sym
    for _, s := range syms {
        ps := plugin.Sym(*s)
        pluginSyms = append(pluginSyms, &ps)
    }
    return pluginSyms, nil
}

func (o *internalObjTool) Disasm(file string, start, end uint64) ([]plugin.Inst, error) {
    insts, err := o.ObjTool.Disasm(file, start, end)
    if err != nil {
        return nil, err
    }
    var pluginInst []plugin.Inst
    for _, inst := range insts {
        pluginInst = append(pluginInst, plugin.Inst(inst))
    }
    return pluginInst, nil
}

// internalSymbolizer is a wrapper to map from the pprof external
// interface to the internal interface.
type internalSymbolizer struct {
    Symbolizer
}

func (s *internalSymbolizer) Symbolize(mode string, srcs plugin.MappingSources, prof *profile.Profile) error {
    isrcs := MappingSources{}
    for m, s := range srcs {
        isrcs[m] = s
    }
    return s.Symbolizer.Symbolize(mode, isrcs, prof)
}
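As a usage illustration (editor-added, not from the upstream sources), the sketch below wires a custom Fetcher into driver.Options and invokes PProf. Options fields left nil are assumed to fall back to the package's internal defaults, and treating the source as a local file path is a simplifying assumption.

```go
// Sketch: invoking the pprof driver with a custom Fetcher that reads a
// local file. Command-line arguments (e.g. "-top profile.pb.gz") are parsed
// by the driver itself via its default FlagSet.
package main

import (
    "log"
    "os"
    "time"

    "github.com/google/pprof/driver"
    "github.com/google/pprof/profile"
)

// fileFetcher treats src as a local path and ignores duration/timeout.
type fileFetcher struct{}

func (fileFetcher) Fetch(src string, duration, timeout time.Duration) (*profile.Profile, string, error) {
    f, err := os.Open(src)
    if err != nil {
        return nil, "", err
    }
    defer f.Close()
    p, err := profile.Parse(f)
    return p, src, err
}

func main() {
    if err := driver.PProf(&driver.Options{Fetch: fileFetcher{}}); err != nil {
        log.Fatal(err)
    }
}
```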
37
vendor/github.com/google/pprof/fuzz/README.md
generated
vendored
Normal file
@@ -0,0 +1,37 @@
This explains how to fuzz ParseData, using github.com/dvyukov/go-fuzz/ for fuzzing.

# How to use
First, get go-fuzz
```
$ go get github.com/dvyukov/go-fuzz/go-fuzz
$ go get github.com/dvyukov/go-fuzz/go-fuzz-build
```

Build the test program by calling the following command
(assuming you have files for pprof located in github.com/google/pprof within Go's src folder)

```
$ go-fuzz-build github.com/google/pprof/fuzz
```
The above command will produce pprof-fuzz.zip.

Now you can run the fuzzer by calling

```
$ go-fuzz -bin=./pprof-fuzz.zip -workdir=fuzz
```

This will save a corpus of files used by the fuzzer in ./fuzz/corpus, and
all files that caused ParseData to crash in ./fuzz/crashers.

For more details on the usage, see github.com/dvyukov/go-fuzz/

# About the corpus

Right now, fuzz/corpus contains the corpus initially given to the fuzzer.

If using the above commands, fuzz/corpus will be used to generate the initial corpus during fuzz testing.

One can add profiles into the corpus by placing these files in the corpus directory (fuzz/corpus)
prior to calling go-fuzz-build.
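For reference, a go-fuzz entry point for ParseData generally looks like the sketch below. This is an editor-added illustration of the standard go-fuzz contract (`func Fuzz(data []byte) int`), not a copy of fuzz/main.go; the return values follow go-fuzz's convention of 1 for interesting inputs and 0 otherwise.

```go
// Illustrative go-fuzz entry point: go-fuzz calls Fuzz with mutated inputs.
package pprof

import "github.com/google/pprof/profile"

// Fuzz returns 1 when the input parsed successfully (guiding go-fuzz toward
// interesting inputs) and 0 otherwise.
func Fuzz(data []byte) int {
    if _, err := profile.ParseData(data); err != nil {
        return 0
    }
    return 1
}
```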
BIN
vendor/github.com/google/pprof/fuzz/corpus/cppbench.cpu.pb
generated
vendored
Normal file
Binary file not shown.
0
vendor/github.com/google/pprof/fuzz/corpus/empty
generated
vendored
Normal file
0
vendor/github.com/google/pprof/fuzz/corpus/empty
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/fuzz/corpus/go.crc32.cpu.pb
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/fuzz/corpus/go.crc32.cpu.pb
generated
vendored
Normal file
Binary file not shown.
BIN
vendor/github.com/google/pprof/fuzz/corpus/gobench.cpu.pb
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/fuzz/corpus/gobench.cpu.pb
generated
vendored
Normal file
Binary file not shown.
BIN
vendor/github.com/google/pprof/fuzz/corpus/java.cpu.pb
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/fuzz/corpus/java.cpu.pb
generated
vendored
Normal file
Binary file not shown.
44
vendor/github.com/google/pprof/fuzz/fuzz_test.go
generated
vendored
Normal file
44
vendor/github.com/google/pprof/fuzz/fuzz_test.go
generated
vendored
Normal file
@@ -0,0 +1,44 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package pprof

import (
	"io/ioutil"
	"runtime"
	"testing"

	"github.com/google/pprof/profile"
)

func TestParseData(t *testing.T) {
	if runtime.GOOS == "nacl" {
		t.Skip("no direct filesystem access on Nacl")
	}

	const path = "testdata/"
	files, err := ioutil.ReadDir(path)
	if err != nil {
		t.Errorf("Problem reading directory %s : %v", path, err)
	}
	for _, f := range files {
		file := path + f.Name()
		inbytes, err := ioutil.ReadFile(file)
		if err != nil {
			t.Errorf("Problem reading file: %s : %v", file, err)
			continue
		}
		profile.ParseData(inbytes)
	}
}
27
vendor/github.com/google/pprof/fuzz/main.go
generated
vendored
Normal file
@@ -0,0 +1,27 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package pprof is used in conjunction with github.com/dvyukov/go-fuzz/go-fuzz
// to fuzz the ParseData function.
package pprof

import (
	"github.com/google/pprof/profile"
)

// Fuzz can be used with https://github.com/dvyukov/go-fuzz to do fuzz testing on ParseData.
func Fuzz(data []byte) int {
	profile.ParseData(data)
	return 0
}
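Because the Fuzz entry point above simply feeds raw bytes to ParseData, a crasher reported by go-fuzz can be replayed outside the fuzzer. The standalone harness below is a sketch under that assumption; the file name repro.go is hypothetical and the crasher path is taken from the command line rather than from the vendored code.

```
// repro.go: replay a single go-fuzz crasher against ParseData.
// Sketch only; not part of the vendored pprof sources.
package main

import (
	"fmt"
	"io/ioutil"
	"os"

	"github.com/google/pprof/profile"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: repro <path to file from fuzz/crashers>")
		os.Exit(2)
	}
	data, err := ioutil.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A panic here reproduces the crash that go-fuzz reported.
	p, err := profile.ParseData(data)
	fmt.Printf("parsed=%v err=%v\n", p != nil, err)
}
```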
2
vendor/github.com/google/pprof/fuzz/testdata/7e3c92482f6f39fc502b822ded792c589849cca8
generated
vendored
Normal file
@@ -0,0 +1,2 @@
--- heapz 1 ---
0 0 @ 0
242
vendor/github.com/google/pprof/internal/binutils/addr2liner.go
generated
vendored
Normal file
@@ -0,0 +1,242 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package binutils
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"os/exec"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultAddr2line = "addr2line"
|
||||
|
||||
// addr2line may produce multiple lines of output. We
|
||||
// use this sentinel to identify the end of the output.
|
||||
sentinel = ^uint64(0)
|
||||
)
|
||||
|
||||
// addr2Liner is a connection to an addr2line command for obtaining
|
||||
// address and line number information from a binary.
|
||||
type addr2Liner struct {
|
||||
mu sync.Mutex
|
||||
rw lineReaderWriter
|
||||
base uint64
|
||||
|
||||
// nm holds an addr2Liner using nm tool. Certain versions of addr2line
|
||||
// produce incomplete names due to
|
||||
// https://sourceware.org/bugzilla/show_bug.cgi?id=17541. As a workaround,
|
||||
// the names from nm are used when they look more complete. See addrInfo()
|
||||
// code below for the exact heuristic.
|
||||
nm *addr2LinerNM
|
||||
}
|
||||
|
||||
// lineReaderWriter is an interface to abstract the I/O to an addr2line
|
||||
// process. It writes a line of input to the job, and reads its output
|
||||
// one line at a time.
|
||||
type lineReaderWriter interface {
|
||||
write(string) error
|
||||
readLine() (string, error)
|
||||
close()
|
||||
}
|
||||
|
||||
type addr2LinerJob struct {
|
||||
cmd *exec.Cmd
|
||||
in io.WriteCloser
|
||||
out *bufio.Reader
|
||||
}
|
||||
|
||||
func (a *addr2LinerJob) write(s string) error {
|
||||
_, err := fmt.Fprint(a.in, s+"\n")
|
||||
return err
|
||||
}
|
||||
|
||||
func (a *addr2LinerJob) readLine() (string, error) {
|
||||
return a.out.ReadString('\n')
|
||||
}
|
||||
|
||||
// close releases any resources used by the addr2liner object.
|
||||
func (a *addr2LinerJob) close() {
|
||||
a.in.Close()
|
||||
a.cmd.Wait()
|
||||
}
|
||||
|
||||
// newAddr2liner starts the given addr2liner command reporting
|
||||
// information about the given executable file. If file is a shared
|
||||
// library, base should be the address at which it was mapped in the
|
||||
// program under consideration.
|
||||
func newAddr2Liner(cmd, file string, base uint64) (*addr2Liner, error) {
|
||||
if cmd == "" {
|
||||
cmd = defaultAddr2line
|
||||
}
|
||||
|
||||
j := &addr2LinerJob{
|
||||
cmd: exec.Command(cmd, "-aif", "-e", file),
|
||||
}
|
||||
|
||||
var err error
|
||||
if j.in, err = j.cmd.StdinPipe(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
outPipe, err := j.cmd.StdoutPipe()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
j.out = bufio.NewReader(outPipe)
|
||||
if err := j.cmd.Start(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
a := &addr2Liner{
|
||||
rw: j,
|
||||
base: base,
|
||||
}
|
||||
|
||||
return a, nil
|
||||
}
|
||||
|
||||
func (d *addr2Liner) readString() (string, error) {
|
||||
s, err := d.rw.readLine()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return strings.TrimSpace(s), nil
|
||||
}
|
||||
|
||||
// readFrame parses the addr2line output for a single address. It
|
||||
// returns a populated plugin.Frame and whether it has reached the end of the
|
||||
// data.
|
||||
func (d *addr2Liner) readFrame() (plugin.Frame, bool) {
|
||||
funcname, err := d.readString()
|
||||
if err != nil {
|
||||
return plugin.Frame{}, true
|
||||
}
|
||||
if strings.HasPrefix(funcname, "0x") {
|
||||
// If addr2line returns a hex address we can assume it is the
|
||||
// sentinel. Read and ignore next two lines of output from
|
||||
// addr2line
|
||||
d.readString()
|
||||
d.readString()
|
||||
return plugin.Frame{}, true
|
||||
}
|
||||
|
||||
fileline, err := d.readString()
|
||||
if err != nil {
|
||||
return plugin.Frame{}, true
|
||||
}
|
||||
|
||||
linenumber := 0
|
||||
|
||||
if funcname == "??" {
|
||||
funcname = ""
|
||||
}
|
||||
|
||||
if fileline == "??:0" {
|
||||
fileline = ""
|
||||
} else {
|
||||
if i := strings.LastIndex(fileline, ":"); i >= 0 {
|
||||
// Remove discriminator, if present
|
||||
if disc := strings.Index(fileline, " (discriminator"); disc > 0 {
|
||||
fileline = fileline[:disc]
|
||||
}
|
||||
// If we cannot parse a number after the last ":", keep it as
|
||||
// part of the filename.
|
||||
if line, err := strconv.Atoi(fileline[i+1:]); err == nil {
|
||||
linenumber = line
|
||||
fileline = fileline[:i]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return plugin.Frame{
|
||||
Func: funcname,
|
||||
File: fileline,
|
||||
Line: linenumber}, false
|
||||
}
|
||||
|
||||
func (d *addr2Liner) rawAddrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
d.mu.Lock()
|
||||
defer d.mu.Unlock()
|
||||
|
||||
if err := d.rw.write(fmt.Sprintf("%x", addr-d.base)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := d.rw.write(fmt.Sprintf("%x", sentinel)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
resp, err := d.readString()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if !strings.HasPrefix(resp, "0x") {
|
||||
return nil, fmt.Errorf("unexpected addr2line output: %s", resp)
|
||||
}
|
||||
|
||||
var stack []plugin.Frame
|
||||
for {
|
||||
frame, end := d.readFrame()
|
||||
if end {
|
||||
break
|
||||
}
|
||||
|
||||
if frame != (plugin.Frame{}) {
|
||||
stack = append(stack, frame)
|
||||
}
|
||||
}
|
||||
return stack, err
|
||||
}
|
||||
|
||||
// addrInfo returns the stack frame information for a specific program
|
||||
// address. It returns nil if the address could not be identified.
|
||||
func (d *addr2Liner) addrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
stack, err := d.rawAddrInfo(addr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Certain versions of addr2line produce incomplete names due to
|
||||
// https://sourceware.org/bugzilla/show_bug.cgi?id=17541. Attempt to replace
|
||||
// the name with a better one from nm.
|
||||
if len(stack) > 0 && d.nm != nil {
|
||||
nm, err := d.nm.addrInfo(addr)
|
||||
if err == nil && len(nm) > 0 {
|
||||
// Last entry in frame list should match since it is non-inlined. As a
|
||||
// simple heuristic, we only switch to the nm-based name if it is longer
|
||||
// by 2 or more characters. We consider nm names that are longer by 1
|
||||
// character insignificant to avoid replacing foo with _foo on MacOS (for
|
||||
// unknown reasons addr2line produces the former and nm produces the
|
||||
// latter on MacOS even though both tools are asked to produce mangled
|
||||
// names).
|
||||
nmName := nm[len(nm)-1].Func
|
||||
a2lName := stack[len(stack)-1].Func
|
||||
if len(nmName) > len(a2lName)+1 {
|
||||
stack[len(stack)-1].Func = nmName
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return stack, nil
|
||||
}
|
175
vendor/github.com/google/pprof/internal/binutils/addr2liner_llvm.go
generated
vendored
Normal file
@@ -0,0 +1,175 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package binutils
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"os/exec"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultLLVMSymbolizer = "llvm-symbolizer"
|
||||
)
|
||||
|
||||
// llvmSymbolizer is a connection to an llvm-symbolizer command for
|
||||
// obtaining address and line number information from a binary.
|
||||
type llvmSymbolizer struct {
|
||||
sync.Mutex
|
||||
filename string
|
||||
rw lineReaderWriter
|
||||
base uint64
|
||||
}
|
||||
|
||||
type llvmSymbolizerJob struct {
|
||||
cmd *exec.Cmd
|
||||
in io.WriteCloser
|
||||
out *bufio.Reader
|
||||
}
|
||||
|
||||
func (a *llvmSymbolizerJob) write(s string) error {
|
||||
_, err := fmt.Fprint(a.in, s+"\n")
|
||||
return err
|
||||
}
|
||||
|
||||
func (a *llvmSymbolizerJob) readLine() (string, error) {
|
||||
return a.out.ReadString('\n')
|
||||
}
|
||||
|
||||
// close releases any resources used by the llvmSymbolizer object.
|
||||
func (a *llvmSymbolizerJob) close() {
|
||||
a.in.Close()
|
||||
a.cmd.Wait()
|
||||
}
|
||||
|
||||
// newLLVMSymbolizer starts the given llvm-symbolizer command reporting
|
||||
// information about the given executable file. If file is a shared
|
||||
// library, base should be the address at which it was mapped in the
|
||||
// program under consideration.
|
||||
func newLLVMSymbolizer(cmd, file string, base uint64) (*llvmSymbolizer, error) {
|
||||
if cmd == "" {
|
||||
cmd = defaultLLVMSymbolizer
|
||||
}
|
||||
|
||||
j := &llvmSymbolizerJob{
|
||||
cmd: exec.Command(cmd, "-inlining", "-demangle=false"),
|
||||
}
|
||||
|
||||
var err error
|
||||
if j.in, err = j.cmd.StdinPipe(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
outPipe, err := j.cmd.StdoutPipe()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
j.out = bufio.NewReader(outPipe)
|
||||
if err := j.cmd.Start(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
a := &llvmSymbolizer{
|
||||
filename: file,
|
||||
rw: j,
|
||||
base: base,
|
||||
}
|
||||
|
||||
return a, nil
|
||||
}
|
||||
|
||||
func (d *llvmSymbolizer) readString() (string, error) {
|
||||
s, err := d.rw.readLine()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return strings.TrimSpace(s), nil
|
||||
}
|
||||
|
||||
// readFrame parses the llvm-symbolizer output for a single address. It
|
||||
// returns a populated plugin.Frame and whether it has reached the end of the
|
||||
// data.
|
||||
func (d *llvmSymbolizer) readFrame() (plugin.Frame, bool) {
|
||||
funcname, err := d.readString()
|
||||
if err != nil {
|
||||
return plugin.Frame{}, true
|
||||
}
|
||||
|
||||
switch funcname {
|
||||
case "":
|
||||
return plugin.Frame{}, true
|
||||
case "??":
|
||||
funcname = ""
|
||||
}
|
||||
|
||||
fileline, err := d.readString()
|
||||
if err != nil {
|
||||
return plugin.Frame{Func: funcname}, true
|
||||
}
|
||||
|
||||
linenumber := 0
|
||||
if fileline == "??:0" {
|
||||
fileline = ""
|
||||
} else {
|
||||
switch split := strings.Split(fileline, ":"); len(split) {
|
||||
case 1:
|
||||
// filename
|
||||
fileline = split[0]
|
||||
case 2, 3:
|
||||
// filename:line, or
// filename:line:disc
|
||||
fileline = split[0]
|
||||
if line, err := strconv.Atoi(split[1]); err == nil {
|
||||
linenumber = line
|
||||
}
|
||||
default:
|
||||
// Unrecognized, ignore
|
||||
}
|
||||
}
|
||||
|
||||
return plugin.Frame{Func: funcname, File: fileline, Line: linenumber}, false
|
||||
}
|
||||
|
||||
// addrInfo returns the stack frame information for a specific program
|
||||
// address. It returns nil if the address could not be identified.
|
||||
func (d *llvmSymbolizer) addrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
d.Lock()
|
||||
defer d.Unlock()
|
||||
|
||||
if err := d.rw.write(fmt.Sprintf("%s 0x%x", d.filename, addr-d.base)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var stack []plugin.Frame
|
||||
for {
|
||||
frame, end := d.readFrame()
|
||||
if end {
|
||||
break
|
||||
}
|
||||
|
||||
if frame != (plugin.Frame{}) {
|
||||
stack = append(stack, frame)
|
||||
}
|
||||
}
|
||||
|
||||
return stack, nil
|
||||
}
|
124
vendor/github.com/google/pprof/internal/binutils/addr2liner_nm.go
generated
vendored
Normal file
@@ -0,0 +1,124 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package binutils
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"io"
|
||||
"os/exec"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultNM = "nm"
|
||||
)
|
||||
|
||||
// addr2LinerNM is a connection to an nm command for obtaining address
|
||||
// information from a binary.
|
||||
type addr2LinerNM struct {
|
||||
m []symbolInfo // Sorted list of addresses from binary.
|
||||
}
|
||||
|
||||
type symbolInfo struct {
|
||||
address uint64
|
||||
name string
|
||||
}
|
||||
|
||||
// newAddr2LinerNM starts the given nm command reporting information about the
|
||||
// given executable file. If file is a shared library, base should be
|
||||
// the address at which it was mapped in the program under
|
||||
// consideration.
|
||||
func newAddr2LinerNM(cmd, file string, base uint64) (*addr2LinerNM, error) {
|
||||
if cmd == "" {
|
||||
cmd = defaultNM
|
||||
}
|
||||
var b bytes.Buffer
|
||||
c := exec.Command(cmd, "-n", file)
|
||||
c.Stdout = &b
|
||||
if err := c.Run(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return parseAddr2LinerNM(base, &b)
|
||||
}
|
||||
|
||||
func parseAddr2LinerNM(base uint64, nm io.Reader) (*addr2LinerNM, error) {
|
||||
a := &addr2LinerNM{
|
||||
m: []symbolInfo{},
|
||||
}
|
||||
|
||||
// Parse nm output and populate symbol map.
|
||||
// Skip lines we fail to parse.
|
||||
buf := bufio.NewReader(nm)
|
||||
for {
|
||||
line, err := buf.ReadString('\n')
|
||||
if line == "" && err != nil {
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
line = strings.TrimSpace(line)
|
||||
fields := strings.SplitN(line, " ", 3)
|
||||
if len(fields) != 3 {
|
||||
continue
|
||||
}
|
||||
address, err := strconv.ParseUint(fields[0], 16, 64)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
a.m = append(a.m, symbolInfo{
|
||||
address: address + base,
|
||||
name: fields[2],
|
||||
})
|
||||
}
|
||||
|
||||
return a, nil
|
||||
}
|
||||
|
||||
// addrInfo returns the stack frame information for a specific program
|
||||
// address. It returns nil if the address could not be identified.
|
||||
func (a *addr2LinerNM) addrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
if len(a.m) == 0 || addr < a.m[0].address || addr > a.m[len(a.m)-1].address {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Binary search. Search until low, high are separated by 1.
|
||||
low, high := 0, len(a.m)
|
||||
for low+1 < high {
|
||||
mid := (low + high) / 2
|
||||
v := a.m[mid].address
|
||||
if addr == v {
|
||||
low = mid
|
||||
break
|
||||
} else if addr > v {
|
||||
low = mid
|
||||
} else {
|
||||
high = mid
|
||||
}
|
||||
}
|
||||
|
||||
// Address is between a.m[low] and a.m[high].
|
||||
// Pick low, as it represents [low, high).
|
||||
f := []plugin.Frame{
|
||||
{
|
||||
Func: a.m[low].name,
|
||||
},
|
||||
}
|
||||
return f, nil
|
||||
}
|
462
vendor/github.com/google/pprof/internal/binutils/binutils.go
generated
vendored
Normal file
@@ -0,0 +1,462 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// Package binutils provides access to the GNU binutils.
|
||||
package binutils
|
||||
|
||||
import (
|
||||
"debug/elf"
|
||||
"debug/macho"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/google/pprof/internal/elfexec"
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
|
||||
// A Binutils implements plugin.ObjTool by invoking the GNU binutils.
|
||||
type Binutils struct {
|
||||
mu sync.Mutex
|
||||
rep *binrep
|
||||
}
|
||||
|
||||
// binrep is an immutable representation for Binutils. It is atomically
|
||||
// replaced on every mutation to provide thread-safe access.
|
||||
type binrep struct {
|
||||
// Commands to invoke.
|
||||
llvmSymbolizer string
|
||||
llvmSymbolizerFound bool
|
||||
addr2line string
|
||||
addr2lineFound bool
|
||||
nm string
|
||||
nmFound bool
|
||||
objdump string
|
||||
objdumpFound bool
|
||||
|
||||
// if fast, perform symbolization using nm (symbol names only),
|
||||
// instead of file-line detail from the slower addr2line.
|
||||
fast bool
|
||||
}
|
||||
|
||||
// get returns the current representation for bu, initializing it if necessary.
|
||||
func (bu *Binutils) get() *binrep {
|
||||
bu.mu.Lock()
|
||||
r := bu.rep
|
||||
if r == nil {
|
||||
r = &binrep{}
|
||||
initTools(r, "")
|
||||
bu.rep = r
|
||||
}
|
||||
bu.mu.Unlock()
|
||||
return r
|
||||
}
|
||||
|
||||
// update modifies the rep for bu via the supplied function.
|
||||
func (bu *Binutils) update(fn func(r *binrep)) {
|
||||
r := &binrep{}
|
||||
bu.mu.Lock()
|
||||
defer bu.mu.Unlock()
|
||||
if bu.rep == nil {
|
||||
initTools(r, "")
|
||||
} else {
|
||||
*r = *bu.rep
|
||||
}
|
||||
fn(r)
|
||||
bu.rep = r
|
||||
}
|
||||
|
||||
// String returns string representation of the binutils state for debug logging.
|
||||
func (bu *Binutils) String() string {
|
||||
r := bu.get()
|
||||
var llvmSymbolizer, addr2line, nm, objdump string
|
||||
if r.llvmSymbolizerFound {
|
||||
llvmSymbolizer = r.llvmSymbolizer
|
||||
}
|
||||
if r.addr2lineFound {
|
||||
addr2line = r.addr2line
|
||||
}
|
||||
if r.nmFound {
|
||||
nm = r.nm
|
||||
}
|
||||
if r.objdumpFound {
|
||||
objdump = r.objdump
|
||||
}
|
||||
return fmt.Sprintf("llvm-symbolizer=%q addr2line=%q nm=%q objdump=%q fast=%t",
|
||||
llvmSymbolizer, addr2line, nm, objdump, r.fast)
|
||||
}
|
||||
|
||||
// SetFastSymbolization sets a toggle that makes binutils use fast
|
||||
// symbolization (using nm), which is much faster than addr2line but
|
||||
// provides only symbol name information (no file/line).
|
||||
func (bu *Binutils) SetFastSymbolization(fast bool) {
|
||||
bu.update(func(r *binrep) { r.fast = fast })
|
||||
}
|
||||
|
||||
// SetTools processes the contents of the tools option. It
|
||||
// expects a set of entries separated by commas; each entry is a pair
|
||||
// of the form t:path, where path will be used to look only for the
|
||||
// tool named t. If t is not specified, the path is searched for all
|
||||
// tools.
|
||||
func (bu *Binutils) SetTools(config string) {
|
||||
bu.update(func(r *binrep) { initTools(r, config) })
|
||||
}
|
||||
|
||||
func initTools(b *binrep, config string) {
|
||||
// paths collect paths per tool; Key "" contains the default.
|
||||
paths := make(map[string][]string)
|
||||
for _, t := range strings.Split(config, ",") {
|
||||
name, path := "", t
|
||||
if ct := strings.SplitN(t, ":", 2); len(ct) == 2 {
|
||||
name, path = ct[0], ct[1]
|
||||
}
|
||||
paths[name] = append(paths[name], path)
|
||||
}
|
||||
|
||||
defaultPath := paths[""]
|
||||
b.llvmSymbolizer, b.llvmSymbolizerFound = findExe("llvm-symbolizer", append(paths["llvm-symbolizer"], defaultPath...))
|
||||
b.addr2line, b.addr2lineFound = findExe("addr2line", append(paths["addr2line"], defaultPath...))
|
||||
if !b.addr2lineFound {
|
||||
// On MacOS, brew installs addr2line under gaddr2line name, so search for
|
||||
// that if the tool is not found by its default name.
|
||||
b.addr2line, b.addr2lineFound = findExe("gaddr2line", append(paths["addr2line"], defaultPath...))
|
||||
}
|
||||
b.nm, b.nmFound = findExe("nm", append(paths["nm"], defaultPath...))
|
||||
b.objdump, b.objdumpFound = findExe("objdump", append(paths["objdump"], defaultPath...))
|
||||
}
|
||||
|
||||
// findExe looks for an executable command on a set of paths.
|
||||
// If it cannot find it, returns cmd.
|
||||
func findExe(cmd string, paths []string) (string, bool) {
|
||||
for _, p := range paths {
|
||||
cp := filepath.Join(p, cmd)
|
||||
if c, err := exec.LookPath(cp); err == nil {
|
||||
return c, true
|
||||
}
|
||||
}
|
||||
return cmd, false
|
||||
}
|
||||
|
||||
// Disasm returns the assembly instructions for the specified address range
|
||||
// of a binary.
|
||||
func (bu *Binutils) Disasm(file string, start, end uint64) ([]plugin.Inst, error) {
|
||||
b := bu.get()
|
||||
cmd := exec.Command(b.objdump, "-d", "-C", "--no-show-raw-insn", "-l",
|
||||
fmt.Sprintf("--start-address=%#x", start),
|
||||
fmt.Sprintf("--stop-address=%#x", end),
|
||||
file)
|
||||
out, err := cmd.Output()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%v: %v", cmd.Args, err)
|
||||
}
|
||||
|
||||
return disassemble(out)
|
||||
}
|
||||
|
||||
// Open satisfies the plugin.ObjTool interface.
|
||||
func (bu *Binutils) Open(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
b := bu.get()
|
||||
|
||||
// Make sure file is a supported executable.
|
||||
// This uses magic numbers, mainly to provide better error messages but
|
||||
// it should also help speed.
|
||||
|
||||
if _, err := os.Stat(name); err != nil {
|
||||
// For testing, do not require file name to exist.
|
||||
if strings.Contains(b.addr2line, "testdata/") {
|
||||
return &fileAddr2Line{file: file{b: b, name: name}}, nil
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Read the first 4 bytes of the file.
|
||||
|
||||
f, err := os.Open(name)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error opening %s: %v", name, err)
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
var header [4]byte
|
||||
if _, err = io.ReadFull(f, header[:]); err != nil {
|
||||
return nil, fmt.Errorf("error reading magic number from %s: %v", name, err)
|
||||
}
|
||||
|
||||
elfMagic := string(header[:])
|
||||
|
||||
// Match against supported file types.
|
||||
if elfMagic == elf.ELFMAG {
|
||||
f, err := b.openELF(name, start, limit, offset)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error reading ELF file %s: %v", name, err)
|
||||
}
|
||||
return f, nil
|
||||
}
|
||||
|
||||
// Mach-O magic numbers can be big or little endian.
|
||||
machoMagicLittle := binary.LittleEndian.Uint32(header[:])
|
||||
machoMagicBig := binary.BigEndian.Uint32(header[:])
|
||||
|
||||
if machoMagicLittle == macho.Magic32 || machoMagicLittle == macho.Magic64 ||
|
||||
machoMagicBig == macho.Magic32 || machoMagicBig == macho.Magic64 {
|
||||
f, err := b.openMachO(name, start, limit, offset)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error reading Mach-O file %s: %v", name, err)
|
||||
}
|
||||
return f, nil
|
||||
}
|
||||
if machoMagicLittle == macho.MagicFat || machoMagicBig == macho.MagicFat {
|
||||
f, err := b.openFatMachO(name, start, limit, offset)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error reading fat Mach-O file %s: %v", name, err)
|
||||
}
|
||||
return f, nil
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("unrecognized binary format: %s", name)
|
||||
}
|
||||
|
||||
func (b *binrep) openMachOCommon(name string, of *macho.File, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
|
||||
// Subtract the load address of the __TEXT section. Usually 0 for shared
|
||||
// libraries or 0x100000000 for executables. You can check this value by
|
||||
// running `objdump -private-headers <file>`.
|
||||
|
||||
textSegment := of.Segment("__TEXT")
|
||||
if textSegment == nil {
|
||||
return nil, fmt.Errorf("could not identify base for %s: no __TEXT segment", name)
|
||||
}
|
||||
if textSegment.Addr > start {
|
||||
return nil, fmt.Errorf("could not identify base for %s: __TEXT segment address (0x%x) > mapping start address (0x%x)",
|
||||
name, textSegment.Addr, start)
|
||||
}
|
||||
|
||||
base := start - textSegment.Addr
|
||||
|
||||
if b.fast || (!b.addr2lineFound && !b.llvmSymbolizerFound) {
|
||||
return &fileNM{file: file{b: b, name: name, base: base}}, nil
|
||||
}
|
||||
return &fileAddr2Line{file: file{b: b, name: name, base: base}}, nil
|
||||
}
|
||||
|
||||
func (b *binrep) openFatMachO(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
of, err := macho.OpenFat(name)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing %s: %v", name, err)
|
||||
}
|
||||
defer of.Close()
|
||||
|
||||
if len(of.Arches) == 0 {
|
||||
return nil, fmt.Errorf("empty fat Mach-O file: %s", name)
|
||||
}
|
||||
|
||||
var arch macho.Cpu
|
||||
// Use the host architecture.
|
||||
// TODO: This is not ideal because the host architecture may not be the one
|
||||
// that was profiled. E.g. an amd64 host can profile a 386 program.
|
||||
switch runtime.GOARCH {
|
||||
case "386":
|
||||
arch = macho.Cpu386
|
||||
case "amd64", "amd64p32":
|
||||
arch = macho.CpuAmd64
|
||||
case "arm", "armbe", "arm64", "arm64be":
|
||||
arch = macho.CpuArm
|
||||
case "ppc":
|
||||
arch = macho.CpuPpc
|
||||
case "ppc64", "ppc64le":
|
||||
arch = macho.CpuPpc64
|
||||
default:
|
||||
return nil, fmt.Errorf("unsupported host architecture for %s: %s", name, runtime.GOARCH)
|
||||
}
|
||||
for i := range of.Arches {
|
||||
if of.Arches[i].Cpu == arch {
|
||||
return b.openMachOCommon(name, of.Arches[i].File, start, limit, offset)
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("architecture not found in %s: %s", name, runtime.GOARCH)
|
||||
}
|
||||
|
||||
func (b *binrep) openMachO(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
of, err := macho.Open(name)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing %s: %v", name, err)
|
||||
}
|
||||
defer of.Close()
|
||||
|
||||
return b.openMachOCommon(name, of, start, limit, offset)
|
||||
}
|
||||
|
||||
func (b *binrep) openELF(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
ef, err := elf.Open(name)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing %s: %v", name, err)
|
||||
}
|
||||
defer ef.Close()
|
||||
|
||||
var stextOffset *uint64
|
||||
var pageAligned = func(addr uint64) bool { return addr%4096 == 0 }
|
||||
if strings.Contains(name, "vmlinux") || !pageAligned(start) || !pageAligned(limit) || !pageAligned(offset) {
|
||||
// Reading all Symbols is expensive, and we only rarely need it so
|
||||
// we don't want to do it every time. But if _stext happens to be
|
||||
// page-aligned but isn't the same as Vaddr, we would symbolize
|
||||
// wrong. So if the addresses aren't page aligned, or if the name
// is "vmlinux", we read _stext. We can be wrong if: (1)
|
||||
// someone passes a kernel path that doesn't contain "vmlinux" AND
|
||||
// (2) _stext is page-aligned AND (3) _stext is not at Vaddr
|
||||
symbols, err := ef.Symbols()
|
||||
if err != nil && err != elf.ErrNoSymbols {
|
||||
return nil, err
|
||||
}
|
||||
for _, s := range symbols {
|
||||
if s.Name == "_stext" {
|
||||
// The kernel may use _stext as the mapping start address.
|
||||
stextOffset = &s.Value
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
base, err := elfexec.GetBase(&ef.FileHeader, elfexec.FindTextProgHeader(ef), stextOffset, start, limit, offset)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not identify base for %s: %v", name, err)
|
||||
}
|
||||
|
||||
buildID := ""
|
||||
if f, err := os.Open(name); err == nil {
|
||||
if id, err := elfexec.GetBuildID(f); err == nil {
|
||||
buildID = fmt.Sprintf("%x", id)
|
||||
}
|
||||
}
|
||||
if b.fast || (!b.addr2lineFound && !b.llvmSymbolizerFound) {
|
||||
return &fileNM{file: file{b, name, base, buildID}}, nil
|
||||
}
|
||||
return &fileAddr2Line{file: file{b, name, base, buildID}}, nil
|
||||
}
|
||||
|
||||
// file implements the binutils.ObjFile interface.
|
||||
type file struct {
|
||||
b *binrep
|
||||
name string
|
||||
base uint64
|
||||
buildID string
|
||||
}
|
||||
|
||||
func (f *file) Name() string {
|
||||
return f.name
|
||||
}
|
||||
|
||||
func (f *file) Base() uint64 {
|
||||
return f.base
|
||||
}
|
||||
|
||||
func (f *file) BuildID() string {
|
||||
return f.buildID
|
||||
}
|
||||
|
||||
func (f *file) SourceLine(addr uint64) ([]plugin.Frame, error) {
|
||||
return []plugin.Frame{}, nil
|
||||
}
|
||||
|
||||
func (f *file) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *file) Symbols(r *regexp.Regexp, addr uint64) ([]*plugin.Sym, error) {
|
||||
// Get from nm a list of symbols sorted by address.
|
||||
cmd := exec.Command(f.b.nm, "-n", f.name)
|
||||
out, err := cmd.Output()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%v: %v", cmd.Args, err)
|
||||
}
|
||||
|
||||
return findSymbols(out, f.name, r, addr)
|
||||
}
|
||||
|
||||
// fileNM implements the binutils.ObjFile interface, using 'nm' to map
|
||||
// addresses to symbols (without file/line number information). It is
|
||||
// faster than fileAddr2Line.
|
||||
type fileNM struct {
|
||||
file
|
||||
addr2linernm *addr2LinerNM
|
||||
}
|
||||
|
||||
func (f *fileNM) SourceLine(addr uint64) ([]plugin.Frame, error) {
|
||||
if f.addr2linernm == nil {
|
||||
addr2liner, err := newAddr2LinerNM(f.b.nm, f.name, f.base)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
f.addr2linernm = addr2liner
|
||||
}
|
||||
return f.addr2linernm.addrInfo(addr)
|
||||
}
|
||||
|
||||
// fileAddr2Line implements the binutils.ObjFile interface, using
|
||||
// llvm-symbolizer, if that's available, or addr2line to map addresses to
|
||||
// symbols (with file/line number information). It can be slow for large
|
||||
// binaries with debug information.
|
||||
type fileAddr2Line struct {
|
||||
once sync.Once
|
||||
file
|
||||
addr2liner *addr2Liner
|
||||
llvmSymbolizer *llvmSymbolizer
|
||||
}
|
||||
|
||||
func (f *fileAddr2Line) SourceLine(addr uint64) ([]plugin.Frame, error) {
|
||||
f.once.Do(f.init)
|
||||
if f.llvmSymbolizer != nil {
|
||||
return f.llvmSymbolizer.addrInfo(addr)
|
||||
}
|
||||
if f.addr2liner != nil {
|
||||
return f.addr2liner.addrInfo(addr)
|
||||
}
|
||||
return nil, fmt.Errorf("could not find local addr2liner")
|
||||
}
|
||||
|
||||
func (f *fileAddr2Line) init() {
|
||||
if llvmSymbolizer, err := newLLVMSymbolizer(f.b.llvmSymbolizer, f.name, f.base); err == nil {
|
||||
f.llvmSymbolizer = llvmSymbolizer
|
||||
return
|
||||
}
|
||||
|
||||
if addr2liner, err := newAddr2Liner(f.b.addr2line, f.name, f.base); err == nil {
|
||||
f.addr2liner = addr2liner
|
||||
|
||||
// When addr2line encounters some gcc compiled binaries, it
|
||||
// drops interesting parts of names in anonymous namespaces.
|
||||
// Fallback to NM for better function names.
|
||||
if nm, err := newAddr2LinerNM(f.b.nm, f.name, f.base); err == nil {
|
||||
f.addr2liner.nm = nm
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (f *fileAddr2Line) Close() error {
|
||||
if f.llvmSymbolizer != nil {
|
||||
f.llvmSymbolizer.rw.close()
|
||||
f.llvmSymbolizer = nil
|
||||
}
|
||||
if f.addr2liner != nil {
|
||||
f.addr2liner.rw.close()
|
||||
f.addr2liner = nil
|
||||
}
|
||||
return nil
|
||||
}
|
398
vendor/github.com/google/pprof/internal/binutils/binutils_test.go
generated
vendored
Normal file
@@ -0,0 +1,398 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package binutils
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"math"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
|
||||
var testAddrMap = map[int]string{
|
||||
1000: "_Z3fooid.clone2",
|
||||
2000: "_ZNSaIiEC1Ev.clone18",
|
||||
3000: "_ZNSt6vectorIS_IS_IiSaIiEESaIS1_EESaIS3_EEixEm",
|
||||
}
|
||||
|
||||
func functionName(level int) (name string) {
|
||||
if name = testAddrMap[level]; name != "" {
|
||||
return name
|
||||
}
|
||||
return fmt.Sprintf("fun%d", level)
|
||||
}
|
||||
|
||||
func TestAddr2Liner(t *testing.T) {
|
||||
const offset = 0x500
|
||||
|
||||
a := addr2Liner{rw: &mockAddr2liner{}, base: offset}
|
||||
for i := 1; i < 8; i++ {
|
||||
addr := i*0x1000 + offset
|
||||
s, err := a.addrInfo(uint64(addr))
|
||||
if err != nil {
|
||||
t.Fatalf("addrInfo(%#x): %v", addr, err)
|
||||
}
|
||||
if len(s) != i {
|
||||
t.Fatalf("addrInfo(%#x): got len==%d, want %d", addr, len(s), i)
|
||||
}
|
||||
for l, f := range s {
|
||||
level := (len(s) - l) * 1000
|
||||
want := plugin.Frame{Func: functionName(level), File: fmt.Sprintf("file%d", level), Line: level}
|
||||
|
||||
if f != want {
|
||||
t.Errorf("AddrInfo(%#x)[%d]: = %+v, want %+v", addr, l, f, want)
|
||||
}
|
||||
}
|
||||
}
|
||||
s, err := a.addrInfo(0xFFFF)
|
||||
if err != nil {
|
||||
t.Fatalf("addrInfo(0xFFFF): %v", err)
|
||||
}
|
||||
if len(s) != 0 {
|
||||
t.Fatalf("AddrInfo(0xFFFF): got len==%d, want 0", len(s))
|
||||
}
|
||||
a.rw.close()
|
||||
}
|
||||
|
||||
type mockAddr2liner struct {
|
||||
output []string
|
||||
}
|
||||
|
||||
func (a *mockAddr2liner) write(s string) error {
|
||||
var lines []string
|
||||
switch s {
|
||||
case "1000":
|
||||
lines = []string{"_Z3fooid.clone2", "file1000:1000"}
|
||||
case "2000":
|
||||
lines = []string{"_ZNSaIiEC1Ev.clone18", "file2000:2000", "_Z3fooid.clone2", "file1000:1000"}
|
||||
case "3000":
|
||||
lines = []string{"_ZNSt6vectorIS_IS_IiSaIiEESaIS1_EESaIS3_EEixEm", "file3000:3000", "_ZNSaIiEC1Ev.clone18", "file2000:2000", "_Z3fooid.clone2", "file1000:1000"}
|
||||
case "4000":
|
||||
lines = []string{"fun4000", "file4000:4000", "_ZNSt6vectorIS_IS_IiSaIiEESaIS1_EESaIS3_EEixEm", "file3000:3000", "_ZNSaIiEC1Ev.clone18", "file2000:2000", "_Z3fooid.clone2", "file1000:1000"}
|
||||
case "5000":
|
||||
lines = []string{"fun5000", "file5000:5000", "fun4000", "file4000:4000", "_ZNSt6vectorIS_IS_IiSaIiEESaIS1_EESaIS3_EEixEm", "file3000:3000", "_ZNSaIiEC1Ev.clone18", "file2000:2000", "_Z3fooid.clone2", "file1000:1000"}
|
||||
case "6000":
|
||||
lines = []string{"fun6000", "file6000:6000", "fun5000", "file5000:5000", "fun4000", "file4000:4000", "_ZNSt6vectorIS_IS_IiSaIiEESaIS1_EESaIS3_EEixEm", "file3000:3000", "_ZNSaIiEC1Ev.clone18", "file2000:2000", "_Z3fooid.clone2", "file1000:1000"}
|
||||
case "7000":
|
||||
lines = []string{"fun7000", "file7000:7000", "fun6000", "file6000:6000", "fun5000", "file5000:5000", "fun4000", "file4000:4000", "_ZNSt6vectorIS_IS_IiSaIiEESaIS1_EESaIS3_EEixEm", "file3000:3000", "_ZNSaIiEC1Ev.clone18", "file2000:2000", "_Z3fooid.clone2", "file1000:1000"}
|
||||
case "8000":
|
||||
lines = []string{"fun8000", "file8000:8000", "fun7000", "file7000:7000", "fun6000", "file6000:6000", "fun5000", "file5000:5000", "fun4000", "file4000:4000", "_ZNSt6vectorIS_IS_IiSaIiEESaIS1_EESaIS3_EEixEm", "file3000:3000", "_ZNSaIiEC1Ev.clone18", "file2000:2000", "_Z3fooid.clone2", "file1000:1000"}
|
||||
case "9000":
|
||||
lines = []string{"fun9000", "file9000:9000", "fun8000", "file8000:8000", "fun7000", "file7000:7000", "fun6000", "file6000:6000", "fun5000", "file5000:5000", "fun4000", "file4000:4000", "_ZNSt6vectorIS_IS_IiSaIiEESaIS1_EESaIS3_EEixEm", "file3000:3000", "_ZNSaIiEC1Ev.clone18", "file2000:2000", "_Z3fooid.clone2", "file1000:1000"}
|
||||
default:
|
||||
lines = []string{"??", "??:0"}
|
||||
}
|
||||
a.output = append(a.output, "0x"+s)
|
||||
a.output = append(a.output, lines...)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (a *mockAddr2liner) readLine() (string, error) {
|
||||
if len(a.output) == 0 {
|
||||
return "", fmt.Errorf("end of file")
|
||||
}
|
||||
next := a.output[0]
|
||||
a.output = a.output[1:]
|
||||
return next, nil
|
||||
}
|
||||
|
||||
func (a *mockAddr2liner) close() {
|
||||
}
|
||||
|
||||
func TestAddr2LinerLookup(t *testing.T) {
|
||||
const oddSizedData = `
|
||||
00001000 T 0x1000
|
||||
00002000 T 0x2000
|
||||
00003000 T 0x3000
|
||||
`
|
||||
const evenSizedData = `
|
||||
0000000000001000 T 0x1000
|
||||
0000000000002000 T 0x2000
|
||||
0000000000003000 T 0x3000
|
||||
0000000000004000 T 0x4000
|
||||
`
|
||||
for _, d := range []string{oddSizedData, evenSizedData} {
|
||||
a, err := parseAddr2LinerNM(0, bytes.NewBufferString(d))
|
||||
if err != nil {
|
||||
t.Errorf("nm parse error: %v", err)
|
||||
continue
|
||||
}
|
||||
for address, want := range map[uint64]string{
|
||||
0x1000: "0x1000",
|
||||
0x1001: "0x1000",
|
||||
0x1FFF: "0x1000",
|
||||
0x2000: "0x2000",
|
||||
0x2001: "0x2000",
|
||||
} {
|
||||
if got, _ := a.addrInfo(address); !checkAddress(got, address, want) {
|
||||
t.Errorf("%x: got %v, want %s", address, got, want)
|
||||
}
|
||||
}
|
||||
for _, unknown := range []uint64{0x0fff, 0x4001} {
|
||||
if got, _ := a.addrInfo(unknown); got != nil {
|
||||
t.Errorf("%x: got %v, want nil", unknown, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func checkAddress(got []plugin.Frame, address uint64, want string) bool {
|
||||
if len(got) != 1 {
|
||||
return false
|
||||
}
|
||||
return got[0].Func == want
|
||||
}
|
||||
|
||||
func TestSetTools(t *testing.T) {
|
||||
// Test that multiple calls work.
|
||||
bu := &Binutils{}
|
||||
bu.SetTools("")
|
||||
bu.SetTools("")
|
||||
}
|
||||
|
||||
func TestSetFastSymbolization(t *testing.T) {
|
||||
// Test that multiple calls work.
|
||||
bu := &Binutils{}
|
||||
bu.SetFastSymbolization(true)
|
||||
bu.SetFastSymbolization(false)
|
||||
}
|
||||
|
||||
func skipUnlessLinuxAmd64(t *testing.T) {
|
||||
if runtime.GOOS != "linux" || runtime.GOARCH != "amd64" {
|
||||
t.Skip("This test only works on x86-64 Linux")
|
||||
}
|
||||
}
|
||||
|
||||
func skipUnlessDarwinAmd64(t *testing.T) {
|
||||
if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
|
||||
t.Skip("This test only works on x86-64 Mac")
|
||||
}
|
||||
}
|
||||
|
||||
func TestDisasm(t *testing.T) {
|
||||
skipUnlessLinuxAmd64(t)
|
||||
bu := &Binutils{}
|
||||
insts, err := bu.Disasm(filepath.Join("testdata", "exe_linux_64"), 0, math.MaxUint64)
|
||||
if err != nil {
|
||||
t.Fatalf("Disasm: unexpected error %v", err)
|
||||
}
|
||||
mainCount := 0
|
||||
for _, x := range insts {
|
||||
if x.Function == "main" {
|
||||
mainCount++
|
||||
}
|
||||
}
|
||||
if mainCount == 0 {
|
||||
t.Error("Disasm: found no main instructions")
|
||||
}
|
||||
}
|
||||
|
||||
func findSymbol(syms []*plugin.Sym, name string) *plugin.Sym {
|
||||
for _, s := range syms {
|
||||
for _, n := range s.Name {
|
||||
if n == name {
|
||||
return s
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestObjFile(t *testing.T) {
|
||||
// If this test fails, check the address for main function in testdata/exe_linux_64
|
||||
// using the command 'nm -n '. Update the hardcoded addresses below to match
|
||||
// the addresses from the output.
|
||||
skipUnlessLinuxAmd64(t)
|
||||
for _, tc := range []struct {
|
||||
desc string
|
||||
start, limit, offset uint64
|
||||
addr uint64
|
||||
}{
|
||||
{"fake mapping", 0, math.MaxUint64, 0, 0x40052d},
|
||||
{"fixed load address", 0x400000, 0x4006fc, 0, 0x40052d},
|
||||
// True user-mode ASLR binaries are ET_DYN rather than ET_EXEC so this case
|
||||
// is a bit artificial except that it approximates the
|
||||
// vmlinux-with-kernel-ASLR case where the binary *is* ET_EXEC.
|
||||
{"simulated ASLR address", 0x500000, 0x5006fc, 0, 0x50052d},
|
||||
} {
|
||||
t.Run(tc.desc, func(t *testing.T) {
|
||||
bu := &Binutils{}
|
||||
f, err := bu.Open(filepath.Join("testdata", "exe_linux_64"), tc.start, tc.limit, tc.offset)
|
||||
if err != nil {
|
||||
t.Fatalf("Open: unexpected error %v", err)
|
||||
}
|
||||
defer f.Close()
|
||||
syms, err := f.Symbols(regexp.MustCompile("main"), 0)
|
||||
if err != nil {
|
||||
t.Fatalf("Symbols: unexpected error %v", err)
|
||||
}
|
||||
|
||||
m := findSymbol(syms, "main")
|
||||
if m == nil {
|
||||
t.Fatalf("Symbols: did not find main")
|
||||
}
|
||||
for _, addr := range []uint64{m.Start + f.Base(), tc.addr} {
|
||||
gotFrames, err := f.SourceLine(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("SourceLine: unexpected error %v", err)
|
||||
}
|
||||
wantFrames := []plugin.Frame{
|
||||
{Func: "main", File: "/tmp/hello.c", Line: 3},
|
||||
}
|
||||
if !reflect.DeepEqual(gotFrames, wantFrames) {
|
||||
t.Fatalf("SourceLine for main: got %v; want %v\n", gotFrames, wantFrames)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestMachoFiles(t *testing.T) {
|
||||
// If this test fails, check the address for main function in testdata/exe_mac_64
|
||||
// and testdata/lib_mac_64 using addr2line or gaddr2line. Update the
|
||||
// hardcoded addresses below to match the addresses from the output.
|
||||
skipUnlessDarwinAmd64(t)
|
||||
|
||||
// Load `file`, pretending it was mapped at `start`. Then get the symbol
|
||||
// table. Check that it contains the symbol `sym` and that the address
|
||||
// `addr` gives the `expected` stack trace.
|
||||
for _, tc := range []struct {
|
||||
desc string
|
||||
file string
|
||||
start, limit, offset uint64
|
||||
addr uint64
|
||||
sym string
|
||||
expected []plugin.Frame
|
||||
}{
|
||||
{"normal mapping", "exe_mac_64", 0x100000000, math.MaxUint64, 0,
|
||||
0x100000f50, "_main",
|
||||
[]plugin.Frame{
|
||||
{Func: "main", File: "/tmp/hello.c", Line: 3},
|
||||
}},
|
||||
{"other mapping", "exe_mac_64", 0x200000000, math.MaxUint64, 0,
|
||||
0x200000f50, "_main",
|
||||
[]plugin.Frame{
|
||||
{Func: "main", File: "/tmp/hello.c", Line: 3},
|
||||
}},
|
||||
{"lib normal mapping", "lib_mac_64", 0, math.MaxUint64, 0,
|
||||
0xfa0, "_bar",
|
||||
[]plugin.Frame{
|
||||
{Func: "bar", File: "/tmp/lib.c", Line: 5},
|
||||
}},
|
||||
} {
|
||||
t.Run(tc.desc, func(t *testing.T) {
|
||||
bu := &Binutils{}
|
||||
f, err := bu.Open(filepath.Join("testdata", tc.file), tc.start, tc.limit, tc.offset)
|
||||
if err != nil {
|
||||
t.Fatalf("Open: unexpected error %v", err)
|
||||
}
|
||||
t.Logf("binutils: %v", bu)
|
||||
if runtime.GOOS == "darwin" && !bu.rep.addr2lineFound && !bu.rep.llvmSymbolizerFound {
|
||||
// On OSX the user needs to install gaddr2line or llvm-symbolizer with
|
||||
// Homebrew, skip the test when the environment doesn't have it
|
||||
// installed.
|
||||
t.Skip("couldn't find addr2line or gaddr2line")
|
||||
}
|
||||
defer f.Close()
|
||||
syms, err := f.Symbols(nil, 0)
|
||||
if err != nil {
|
||||
t.Fatalf("Symbols: unexpected error %v", err)
|
||||
}
|
||||
|
||||
m := findSymbol(syms, tc.sym)
|
||||
if m == nil {
|
||||
t.Fatalf("Symbols: could not find symbol %v", tc.sym)
|
||||
}
|
||||
gotFrames, err := f.SourceLine(tc.addr)
|
||||
if err != nil {
|
||||
t.Fatalf("SourceLine: unexpected error %v", err)
|
||||
}
|
||||
if !reflect.DeepEqual(gotFrames, tc.expected) {
|
||||
t.Fatalf("SourceLine for main: got %v; want %v\n", gotFrames, tc.expected)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestLLVMSymbolizer(t *testing.T) {
|
||||
if runtime.GOOS != "linux" {
|
||||
t.Skip("testtdata/llvm-symbolizer has only been tested on linux")
|
||||
}
|
||||
|
||||
cmd := filepath.Join("testdata", "fake-llvm-symbolizer")
|
||||
symbolizer, err := newLLVMSymbolizer(cmd, "foo", 0)
|
||||
if err != nil {
|
||||
t.Fatalf("newLLVMSymbolizer: unexpected error %v", err)
|
||||
}
|
||||
defer symbolizer.rw.close()
|
||||
|
||||
for _, c := range []struct {
|
||||
addr uint64
|
||||
frames []plugin.Frame
|
||||
}{
|
||||
{0x10, []plugin.Frame{
|
||||
{Func: "Inlined_0x10", File: "foo.h", Line: 0},
|
||||
{Func: "Func_0x10", File: "foo.c", Line: 2},
|
||||
}},
|
||||
{0x20, []plugin.Frame{
|
||||
{Func: "Inlined_0x20", File: "foo.h", Line: 0},
|
||||
{Func: "Func_0x20", File: "foo.c", Line: 2},
|
||||
}},
|
||||
} {
|
||||
frames, err := symbolizer.addrInfo(c.addr)
|
||||
if err != nil {
|
||||
t.Errorf("LLVM: unexpected error %v", err)
|
||||
continue
|
||||
}
|
||||
if !reflect.DeepEqual(frames, c.frames) {
|
||||
t.Errorf("LLVM: expect %v; got %v\n", c.frames, frames)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestOpenMalformedELF(t *testing.T) {
|
||||
// Test that opening a malformed ELF file will report an error containing
|
||||
// the word "ELF".
|
||||
bu := &Binutils{}
|
||||
_, err := bu.Open(filepath.Join("testdata", "malformed_elf"), 0, 0, 0)
|
||||
if err == nil {
|
||||
t.Fatalf("Open: unexpected success")
|
||||
}
|
||||
|
||||
if !strings.Contains(err.Error(), "ELF") {
|
||||
t.Errorf("Open: got %v, want error containing 'ELF'", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestOpenMalformedMachO(t *testing.T) {
|
||||
// Test that opening a malformed Mach-O file will report an error containing
|
||||
// the word "Mach-O".
|
||||
bu := &Binutils{}
|
||||
_, err := bu.Open(filepath.Join("testdata", "malformed_macho"), 0, 0, 0)
|
||||
if err == nil {
|
||||
t.Fatalf("Open: unexpected success")
|
||||
}
|
||||
|
||||
if !strings.Contains(err.Error(), "Mach-O") {
|
||||
t.Errorf("Open: got %v, want error containing 'Mach-O'", err)
|
||||
}
|
||||
}
|
171
vendor/github.com/google/pprof/internal/binutils/disasm.go
generated
vendored
Normal file
@@ -0,0 +1,171 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package binutils
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
"regexp"
|
||||
"strconv"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/ianlancetaylor/demangle"
|
||||
)
|
||||
|
||||
var (
|
||||
nmOutputRE = regexp.MustCompile(`^\s*([[:xdigit:]]+)\s+(.)\s+(.*)`)
|
||||
objdumpAsmOutputRE = regexp.MustCompile(`^\s*([[:xdigit:]]+):\s+(.*)`)
|
||||
objdumpOutputFileLine = regexp.MustCompile(`^(.*):([0-9]+)`)
|
||||
objdumpOutputFunction = regexp.MustCompile(`^(\S.*)\(\):`)
|
||||
)
|
||||
|
||||
func findSymbols(syms []byte, file string, r *regexp.Regexp, address uint64) ([]*plugin.Sym, error) {
|
||||
	// Collect all symbols from the nm output, grouping names mapped to
	// the same address into a single symbol.

	// The symbols to return.
	var symbols []*plugin.Sym

	// The current group of symbol names, and the address they are all at.
	names, start := []string{}, uint64(0)

	buf := bytes.NewBuffer(syms)

	for {
		symAddr, name, err := nextSymbol(buf)
		if err == io.EOF {
			// Done. If there was an unfinished group, append it.
			if len(names) != 0 {
				if match := matchSymbol(names, start, symAddr-1, r, address); match != nil {
					symbols = append(symbols, &plugin.Sym{Name: match, File: file, Start: start, End: symAddr - 1})
				}
			}

			// And return the symbols.
			return symbols, nil
		}

		if err != nil {
			// There was some kind of serious error reading nm's output.
			return nil, err
		}

		// If this symbol is at the same address as the current group, add it to the group.
		if symAddr == start {
			names = append(names, name)
			continue
		}

		// Otherwise append the current group to the list of symbols.
		if match := matchSymbol(names, start, symAddr-1, r, address); match != nil {
			symbols = append(symbols, &plugin.Sym{Name: match, File: file, Start: start, End: symAddr - 1})
		}

		// And start a new group.
		names, start = []string{name}, symAddr
	}
}

// matchSymbol checks whether a symbol is to be selected by matching its
// name against the regexp and, optionally, its address. It returns the
// name(s) to be used for the matched symbol, or nil if there is no match.
func matchSymbol(names []string, start, end uint64, r *regexp.Regexp, address uint64) []string {
	if address != 0 && address >= start && address <= end {
		return names
	}
	for _, name := range names {
		if r == nil || r.MatchString(name) {
			return []string{name}
		}

		// Match all possible demangled versions of the name.
		for _, o := range [][]demangle.Option{
			{demangle.NoClones},
			{demangle.NoParams},
			{demangle.NoParams, demangle.NoTemplateParams},
		} {
			if demangled, err := demangle.ToString(name, o...); err == nil && r.MatchString(demangled) {
				return []string{demangled}
			}
		}
	}
	return nil
}

// disassemble parses the output of the objdump command and returns
// the assembly instructions in a slice.
func disassemble(asm []byte) ([]plugin.Inst, error) {
	buf := bytes.NewBuffer(asm)
	function, file, line := "", "", 0
	var assembly []plugin.Inst
	for {
		input, err := buf.ReadString('\n')
		if err != nil {
			if err != io.EOF {
				return nil, err
			}
			if input == "" {
				break
			}
		}

		if fields := objdumpAsmOutputRE.FindStringSubmatch(input); len(fields) == 3 {
			if address, err := strconv.ParseUint(fields[1], 16, 64); err == nil {
				assembly = append(assembly,
					plugin.Inst{
						Addr:     address,
						Text:     fields[2],
						Function: function,
						File:     file,
						Line:     line,
					})
				continue
			}
		}
		if fields := objdumpOutputFileLine.FindStringSubmatch(input); len(fields) == 3 {
			if l, err := strconv.ParseUint(fields[2], 10, 32); err == nil {
				file, line = fields[1], int(l)
			}
			continue
		}
		if fields := objdumpOutputFunction.FindStringSubmatch(input); len(fields) == 2 {
			function = fields[1]
			continue
		}
		// Reset on unrecognized lines.
		function, file, line = "", "", 0
	}

	return assembly, nil
}

// nextSymbol parses the nm output to find the next symbol listed.
// Skips over any output it cannot recognize.
func nextSymbol(buf *bytes.Buffer) (uint64, string, error) {
	for {
		line, err := buf.ReadString('\n')
		if err != nil {
			if err != io.EOF || line == "" {
				return 0, "", err
			}
		}

		if fields := nmOutputRE.FindStringSubmatch(line); len(fields) == 4 {
			if address, err := strconv.ParseUint(fields[1], 16, 64); err == nil {
				return address, fields[3], nil
			}
		}
	}
}
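// Illustrative sketch, not part of the vendored source: nextSymbol expects nm
// output lines of the form "<hex-address> <type> <name>" (see the test data in
// disasm_test.go below), and findSymbols groups consecutive names that share
// an address. With a hypothetical regexp and object file name:
//
//	syms := []byte("0000000000001000 t start\n0000000000002000 t _the_end\n")
//	matched, err := findSymbols(syms, "object.o", regexp.MustCompile("start"), 0)
//	// err == nil; matched holds one *plugin.Sym named "start" covering
//	// 0x1000-0x1fff (the next symbol's address minus one).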
152
vendor/github.com/google/pprof/internal/binutils/disasm_test.go
generated
vendored
Normal file
152
vendor/github.com/google/pprof/internal/binutils/disasm_test.go
generated
vendored
Normal file
@@ -0,0 +1,152 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package binutils
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"regexp"
|
||||
"testing"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
|
||||
// TestFindSymbols tests the FindSymbols routine using a hardcoded nm output.
|
||||
func TestFindSymbols(t *testing.T) {
|
||||
type testcase struct {
|
||||
query, syms string
|
||||
want []plugin.Sym
|
||||
}
|
||||
|
||||
testsyms := `0000000000001000 t lineA001
|
||||
0000000000001000 t lineA002
|
||||
0000000000001000 t line1000
|
||||
0000000000002000 t line200A
|
||||
0000000000002000 t line2000
|
||||
0000000000002000 t line200B
|
||||
0000000000003000 t line3000
|
||||
0000000000003000 t _ZNK4DumbclEPKc
|
||||
0000000000003000 t lineB00C
|
||||
0000000000003000 t line300D
|
||||
0000000000004000 t _the_end
|
||||
`
|
||||
testcases := []testcase{
|
||||
{
|
||||
"line.*[AC]",
|
||||
testsyms,
|
||||
[]plugin.Sym{
|
||||
{Name: []string{"lineA001"}, File: "object.o", Start: 0x1000, End: 0x1FFF},
|
||||
{Name: []string{"line200A"}, File: "object.o", Start: 0x2000, End: 0x2FFF},
|
||||
{Name: []string{"lineB00C"}, File: "object.o", Start: 0x3000, End: 0x3FFF},
|
||||
},
|
||||
},
|
||||
{
|
||||
"Dumb::operator",
|
||||
testsyms,
|
||||
[]plugin.Sym{
|
||||
{Name: []string{"Dumb::operator()(char const*) const"}, File: "object.o", Start: 0x3000, End: 0x3FFF},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testcases {
|
||||
syms, err := findSymbols([]byte(tc.syms), "object.o", regexp.MustCompile(tc.query), 0)
|
||||
if err != nil {
|
||||
t.Fatalf("%q: findSymbols: %v", tc.query, err)
|
||||
}
|
||||
if err := checkSymbol(syms, tc.want); err != nil {
|
||||
t.Errorf("%q: %v", tc.query, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func checkSymbol(got []*plugin.Sym, want []plugin.Sym) error {
|
||||
if len(got) != len(want) {
|
||||
return fmt.Errorf("unexpected number of symbols %d (want %d)", len(got), len(want))
|
||||
}
|
||||
|
||||
for i, g := range got {
|
||||
w := want[i]
|
||||
if len(g.Name) != len(w.Name) {
|
||||
return fmt.Errorf("names, got %d, want %d", len(g.Name), len(w.Name))
|
||||
}
|
||||
for n := range g.Name {
|
||||
if g.Name[n] != w.Name[n] {
|
||||
return fmt.Errorf("name %d, got %q, want %q", n, g.Name[n], w.Name[n])
|
||||
}
|
||||
}
|
||||
if g.File != w.File {
|
||||
return fmt.Errorf("filename, got %q, want %q", g.File, w.File)
|
||||
}
|
||||
if g.Start != w.Start {
|
||||
return fmt.Errorf("start address, got %#x, want %#x", g.Start, w.Start)
|
||||
}
|
||||
if g.End != w.End {
|
||||
return fmt.Errorf("end address, got %#x, want %#x", g.End, w.End)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// TestFunctionAssembly tests the FunctionAssembly routine by using a
|
||||
// fake objdump script.
|
||||
func TestFunctionAssembly(t *testing.T) {
|
||||
type testcase struct {
|
||||
s plugin.Sym
|
||||
asm string
|
||||
want []plugin.Inst
|
||||
}
|
||||
testcases := []testcase{
|
||||
{
|
||||
plugin.Sym{Name: []string{"symbol1"}, Start: 0x1000, End: 0x1FFF},
|
||||
` 1000: instruction one
|
||||
1001: instruction two
|
||||
1002: instruction three
|
||||
1003: instruction four
|
||||
`,
|
||||
[]plugin.Inst{
|
||||
{Addr: 0x1000, Text: "instruction one"},
|
||||
{Addr: 0x1001, Text: "instruction two"},
|
||||
{Addr: 0x1002, Text: "instruction three"},
|
||||
{Addr: 0x1003, Text: "instruction four"},
|
||||
},
|
||||
},
|
||||
{
|
||||
plugin.Sym{Name: []string{"symbol2"}, Start: 0x2000, End: 0x2FFF},
|
||||
` 2000: instruction one
|
||||
2001: instruction two
|
||||
`,
|
||||
[]plugin.Inst{
|
||||
{Addr: 0x2000, Text: "instruction one"},
|
||||
{Addr: 0x2001, Text: "instruction two"},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testcases {
|
||||
insts, err := disassemble([]byte(tc.asm))
|
||||
if err != nil {
|
||||
t.Fatalf("FunctionAssembly: %v", err)
|
||||
}
|
||||
|
||||
if len(insts) != len(tc.want) {
|
||||
t.Errorf("Unexpected number of assembly instructions %d (want %d)\n", len(insts), len(tc.want))
|
||||
}
|
||||
for i := range insts {
|
||||
if insts[i] != tc.want[i] {
|
||||
t.Errorf("Expected symbol %v, got %v\n", tc.want[i], insts[i])
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
58
vendor/github.com/google/pprof/internal/binutils/testdata/build_binaries.sh
generated
vendored
Executable file
58
vendor/github.com/google/pprof/internal/binutils/testdata/build_binaries.sh
generated
vendored
Executable file
@@ -0,0 +1,58 @@
|
||||
#!/bin/bash -x
|
||||
|
||||
# Copyright 2019 Google Inc. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# This is a script that generates the test executables for MacOS and Linux
|
||||
# in this directory. It should rarely be necessary to run this script.
|
||||
# It is mostly provided as a future reference on how the original binary
|
||||
# set was created.
|
||||
|
||||
# When a new executable is generated, hardcoded addresses in the
|
||||
# functions TestObjFile, TestMachoFiles in binutils_test.go must be updated.
|
||||
|
||||
set -o errexit
|
||||
|
||||
cat <<EOF >/tmp/hello.c
|
||||
#include <stdio.h>
|
||||
|
||||
int main() {
|
||||
printf("Hello, world!\n");
|
||||
return 0;
|
||||
}
|
||||
EOF
|
||||
|
||||
cd $(dirname $0)
|
||||
|
||||
if [[ "$OSTYPE" == "linux-gnu" ]]; then
|
||||
rm -rf exe_linux_64*
|
||||
cc -g -o exe_linux_64 /tmp/hello.c
|
||||
elif [[ "$OSTYPE" == "darwin"* ]]; then
|
||||
cat <<EOF >/tmp/lib.c
|
||||
int foo() {
|
||||
return 1;
|
||||
}
|
||||
|
||||
int bar() {
|
||||
return 2;
|
||||
}
|
||||
EOF
|
||||
|
||||
rm -rf exe_mac_64* lib_mac_64*
|
||||
clang -g -o exe_mac_64 /tmp/hello.c
|
||||
clang -g -o lib_mac_64 -dynamiclib /tmp/lib.c
|
||||
else
|
||||
echo "Unknown OS: $OSTYPE"
|
||||
exit 1
|
||||
fi
|
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/exe_linux_64
generated
vendored
Executable file
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/exe_linux_64
generated
vendored
Executable file
Binary file not shown.
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/exe_mac_64
generated
vendored
Executable file
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/exe_mac_64
generated
vendored
Executable file
Binary file not shown.
20
vendor/github.com/google/pprof/internal/binutils/testdata/exe_mac_64.dSYM/Contents/Info.plist
generated
vendored
Normal file
20
vendor/github.com/google/pprof/internal/binutils/testdata/exe_mac_64.dSYM/Contents/Info.plist
generated
vendored
Normal file
@@ -0,0 +1,20 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
|
||||
<plist version="1.0">
|
||||
<dict>
|
||||
<key>CFBundleDevelopmentRegion</key>
|
||||
<string>English</string>
|
||||
<key>CFBundleIdentifier</key>
|
||||
<string>com.apple.xcode.dsym.exe_mac_64</string>
|
||||
<key>CFBundleInfoDictionaryVersion</key>
|
||||
<string>6.0</string>
|
||||
<key>CFBundlePackageType</key>
|
||||
<string>dSYM</string>
|
||||
<key>CFBundleSignature</key>
|
||||
<string>????</string>
|
||||
<key>CFBundleShortVersionString</key>
|
||||
<string>1.0</string>
|
||||
<key>CFBundleVersion</key>
|
||||
<string>1</string>
|
||||
</dict>
|
||||
</plist>
|
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/exe_mac_64.dSYM/Contents/Resources/DWARF/exe_mac_64
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/exe_mac_64.dSYM/Contents/Resources/DWARF/exe_mac_64
generated
vendored
Normal file
Binary file not shown.
34
vendor/github.com/google/pprof/internal/binutils/testdata/fake-llvm-symbolizer
generated
vendored
Executable file
34
vendor/github.com/google/pprof/internal/binutils/testdata/fake-llvm-symbolizer
generated
vendored
Executable file
@@ -0,0 +1,34 @@
|
||||
#!/bin/sh
|
||||
#
|
||||
# Copyright 2014 Google Inc. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
# Fake llvm-symbolizer to use in tests
|
||||
|
||||
set -f
|
||||
IFS=" "
|
||||
|
||||
while read line; do
|
||||
# line has form:
|
||||
# filename 0xaddr
|
||||
# Emit dummy output that matches llvm-symbolizer output format.
|
||||
set -- $line
|
||||
fname=$1
|
||||
addr=$2
|
||||
echo "Inlined_$addr"
|
||||
echo "$fname.h"
|
||||
echo "Func_$addr"
|
||||
echo "$fname.c:2"
|
||||
echo
|
||||
done
|
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/lib_mac_64
generated
vendored
Executable file
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/lib_mac_64
generated
vendored
Executable file
Binary file not shown.
20
vendor/github.com/google/pprof/internal/binutils/testdata/lib_mac_64.dSYM/Contents/Info.plist
generated
vendored
Normal file
20
vendor/github.com/google/pprof/internal/binutils/testdata/lib_mac_64.dSYM/Contents/Info.plist
generated
vendored
Normal file
@@ -0,0 +1,20 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
|
||||
<plist version="1.0">
|
||||
<dict>
|
||||
<key>CFBundleDevelopmentRegion</key>
|
||||
<string>English</string>
|
||||
<key>CFBundleIdentifier</key>
|
||||
<string>com.apple.xcode.dsym.lib_mac_64</string>
|
||||
<key>CFBundleInfoDictionaryVersion</key>
|
||||
<string>6.0</string>
|
||||
<key>CFBundlePackageType</key>
|
||||
<string>dSYM</string>
|
||||
<key>CFBundleSignature</key>
|
||||
<string>????</string>
|
||||
<key>CFBundleShortVersionString</key>
|
||||
<string>1.0</string>
|
||||
<key>CFBundleVersion</key>
|
||||
<string>1</string>
|
||||
</dict>
|
||||
</plist>
|
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/lib_mac_64.dSYM/Contents/Resources/DWARF/lib_mac_64
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/internal/binutils/testdata/lib_mac_64.dSYM/Contents/Resources/DWARF/lib_mac_64
generated
vendored
Normal file
Binary file not shown.
1
vendor/github.com/google/pprof/internal/binutils/testdata/malformed_elf
generated
vendored
Normal file
1
vendor/github.com/google/pprof/internal/binutils/testdata/malformed_elf
generated
vendored
Normal file
@@ -0,0 +1 @@
|
||||
ELF<EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>
|
1
vendor/github.com/google/pprof/internal/binutils/testdata/malformed_macho
generated
vendored
Normal file
1
vendor/github.com/google/pprof/internal/binutils/testdata/malformed_macho
generated
vendored
Normal file
@@ -0,0 +1 @@
|
||||
<EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>
|
358
vendor/github.com/google/pprof/internal/driver/cli.go
generated
vendored
Normal file
358
vendor/github.com/google/pprof/internal/driver/cli.go
generated
vendored
Normal file
@@ -0,0 +1,358 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/google/pprof/internal/binutils"
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
|
||||
type source struct {
|
||||
Sources []string
|
||||
ExecName string
|
||||
BuildID string
|
||||
Base []string
|
||||
DiffBase bool
|
||||
Normalize bool
|
||||
|
||||
Seconds int
|
||||
Timeout int
|
||||
Symbolize string
|
||||
HTTPHostport string
|
||||
HTTPDisableBrowser bool
|
||||
Comment string
|
||||
}
|
||||
|
||||
// parseFlags parses the command lines through the specified flags package
|
||||
// and returns the source of the profile and optionally the command
|
||||
// for the kind of report to generate (nil for interactive use).
|
||||
func parseFlags(o *plugin.Options) (*source, []string, error) {
|
||||
flag := o.Flagset
|
||||
// Comparisons.
|
||||
flagDiffBase := flag.StringList("diff_base", "", "Source of base profile for comparison")
|
||||
flagBase := flag.StringList("base", "", "Source of base profile for profile subtraction")
|
||||
// Source options.
|
||||
flagSymbolize := flag.String("symbolize", "", "Options for profile symbolization")
|
||||
flagBuildID := flag.String("buildid", "", "Override build id for first mapping")
|
||||
flagTimeout := flag.Int("timeout", -1, "Timeout in seconds for fetching a profile")
|
||||
flagAddComment := flag.String("add_comment", "", "Annotation string to record in the profile")
|
||||
// CPU profile options
|
||||
flagSeconds := flag.Int("seconds", -1, "Length of time for dynamic profiles")
|
||||
// Heap profile options
|
||||
flagInUseSpace := flag.Bool("inuse_space", false, "Display in-use memory size")
|
||||
flagInUseObjects := flag.Bool("inuse_objects", false, "Display in-use object counts")
|
||||
flagAllocSpace := flag.Bool("alloc_space", false, "Display allocated memory size")
|
||||
flagAllocObjects := flag.Bool("alloc_objects", false, "Display allocated object counts")
|
||||
// Contention profile options
|
||||
flagTotalDelay := flag.Bool("total_delay", false, "Display total delay at each region")
|
||||
flagContentions := flag.Bool("contentions", false, "Display number of delays at each region")
|
||||
flagMeanDelay := flag.Bool("mean_delay", false, "Display mean delay at each region")
|
||||
flagTools := flag.String("tools", os.Getenv("PPROF_TOOLS"), "Path for object tool pathnames")
|
||||
|
||||
flagHTTP := flag.String("http", "", "Present interactive web UI at the specified http host:port")
|
||||
flagNoBrowser := flag.Bool("no_browser", false, "Skip opening a browser for the interactive web UI")
|
||||
|
||||
// Flags used during command processing
|
||||
installedFlags := installFlags(flag)
|
||||
|
||||
flagCommands := make(map[string]*bool)
|
||||
flagParamCommands := make(map[string]*string)
|
||||
for name, cmd := range pprofCommands {
|
||||
if cmd.hasParam {
|
||||
flagParamCommands[name] = flag.String(name, "", "Generate a report in "+name+" format, matching regexp")
|
||||
} else {
|
||||
flagCommands[name] = flag.Bool(name, false, "Generate a report in "+name+" format")
|
||||
}
|
||||
}
|
||||
|
||||
args := flag.Parse(func() {
|
||||
o.UI.Print(usageMsgHdr +
|
||||
usage(true) +
|
||||
usageMsgSrc +
|
||||
flag.ExtraUsage() +
|
||||
usageMsgVars)
|
||||
})
|
||||
if len(args) == 0 {
|
||||
return nil, nil, errors.New("no profile source specified")
|
||||
}
|
||||
|
||||
var execName string
|
||||
// Recognize first argument as an executable or buildid override.
|
||||
if len(args) > 1 {
|
||||
arg0 := args[0]
|
||||
if file, err := o.Obj.Open(arg0, 0, ^uint64(0), 0); err == nil {
|
||||
file.Close()
|
||||
execName = arg0
|
||||
args = args[1:]
|
||||
} else if *flagBuildID == "" && isBuildID(arg0) {
|
||||
*flagBuildID = arg0
|
||||
args = args[1:]
|
||||
}
|
||||
}
|
||||
|
||||
// Report conflicting options
|
||||
if err := updateFlags(installedFlags); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
cmd, err := outputFormat(flagCommands, flagParamCommands)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
if cmd != nil && *flagHTTP != "" {
|
||||
return nil, nil, errors.New("-http is not compatible with an output format on the command line")
|
||||
}
|
||||
|
||||
if *flagNoBrowser && *flagHTTP == "" {
|
||||
return nil, nil, errors.New("-no_browser only makes sense with -http")
|
||||
}
|
||||
|
||||
si := pprofVariables["sample_index"].value
|
||||
si = sampleIndex(flagTotalDelay, si, "delay", "-total_delay", o.UI)
|
||||
si = sampleIndex(flagMeanDelay, si, "delay", "-mean_delay", o.UI)
|
||||
si = sampleIndex(flagContentions, si, "contentions", "-contentions", o.UI)
|
||||
si = sampleIndex(flagInUseSpace, si, "inuse_space", "-inuse_space", o.UI)
|
||||
si = sampleIndex(flagInUseObjects, si, "inuse_objects", "-inuse_objects", o.UI)
|
||||
si = sampleIndex(flagAllocSpace, si, "alloc_space", "-alloc_space", o.UI)
|
||||
si = sampleIndex(flagAllocObjects, si, "alloc_objects", "-alloc_objects", o.UI)
|
||||
pprofVariables.set("sample_index", si)
|
||||
|
||||
if *flagMeanDelay {
|
||||
pprofVariables.set("mean", "true")
|
||||
}
|
||||
|
||||
source := &source{
|
||||
Sources: args,
|
||||
ExecName: execName,
|
||||
BuildID: *flagBuildID,
|
||||
Seconds: *flagSeconds,
|
||||
Timeout: *flagTimeout,
|
||||
Symbolize: *flagSymbolize,
|
||||
HTTPHostport: *flagHTTP,
|
||||
HTTPDisableBrowser: *flagNoBrowser,
|
||||
Comment: *flagAddComment,
|
||||
}
|
||||
|
||||
if err := source.addBaseProfiles(*flagBase, *flagDiffBase); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
normalize := pprofVariables["normalize"].boolValue()
|
||||
if normalize && len(source.Base) == 0 {
|
||||
return nil, nil, errors.New("must have base profile to normalize by")
|
||||
}
|
||||
source.Normalize = normalize
|
||||
|
||||
if bu, ok := o.Obj.(*binutils.Binutils); ok {
|
||||
bu.SetTools(*flagTools)
|
||||
}
|
||||
return source, cmd, nil
|
||||
}
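// Illustrative sketch, not part of the vendored source: given a command line
// such as
//
//	pprof -top -nodecount=10 ./mybinary cpu.pb.gz
//
// parseFlags returns cmd == ["top"] and a source with ExecName "./mybinary"
// (assuming it opens as an object file) and Sources == ["cpu.pb.gz"]; the
// binary and profile names here are placeholders.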
|
||||
|
||||
// addBaseProfiles adds the list of base profiles or diff base profiles to
|
||||
// the source. This function will return an error if both base and diff base
|
||||
// profiles are specified.
|
||||
func (source *source) addBaseProfiles(flagBase, flagDiffBase []*string) error {
|
||||
base, diffBase := dropEmpty(flagBase), dropEmpty(flagDiffBase)
|
||||
if len(base) > 0 && len(diffBase) > 0 {
|
||||
return errors.New("-base and -diff_base flags cannot both be specified")
|
||||
}
|
||||
|
||||
source.Base = base
|
||||
if len(diffBase) > 0 {
|
||||
source.Base, source.DiffBase = diffBase, true
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// dropEmpty list takes a slice of string pointers, and outputs a slice of
|
||||
// non-empty strings associated with the flag.
|
||||
func dropEmpty(list []*string) []string {
|
||||
var l []string
|
||||
for _, s := range list {
|
||||
if *s != "" {
|
||||
l = append(l, *s)
|
||||
}
|
||||
}
|
||||
return l
|
||||
}
|
||||
|
||||
// installFlags creates command line flags for pprof variables.
|
||||
func installFlags(flag plugin.FlagSet) flagsInstalled {
|
||||
f := flagsInstalled{
|
||||
ints: make(map[string]*int),
|
||||
bools: make(map[string]*bool),
|
||||
floats: make(map[string]*float64),
|
||||
strings: make(map[string]*string),
|
||||
}
|
||||
for n, v := range pprofVariables {
|
||||
switch v.kind {
|
||||
case boolKind:
|
||||
if v.group != "" {
|
||||
// Set all radio variables to false to identify conflicts.
|
||||
f.bools[n] = flag.Bool(n, false, v.help)
|
||||
} else {
|
||||
f.bools[n] = flag.Bool(n, v.boolValue(), v.help)
|
||||
}
|
||||
case intKind:
|
||||
f.ints[n] = flag.Int(n, v.intValue(), v.help)
|
||||
case floatKind:
|
||||
f.floats[n] = flag.Float64(n, v.floatValue(), v.help)
|
||||
case stringKind:
|
||||
f.strings[n] = flag.String(n, v.value, v.help)
|
||||
}
|
||||
}
|
||||
return f
|
||||
}
|
||||
|
||||
// updateFlags updates the pprof variables according to the flags
|
||||
// parsed in the command line.
|
||||
func updateFlags(f flagsInstalled) error {
|
||||
vars := pprofVariables
|
||||
groups := map[string]string{}
|
||||
for n, v := range f.bools {
|
||||
vars.set(n, fmt.Sprint(*v))
|
||||
if *v {
|
||||
g := vars[n].group
|
||||
if g != "" && groups[g] != "" {
|
||||
return fmt.Errorf("conflicting options %q and %q set", n, groups[g])
|
||||
}
|
||||
groups[g] = n
|
||||
}
|
||||
}
|
||||
for n, v := range f.ints {
|
||||
vars.set(n, fmt.Sprint(*v))
|
||||
}
|
||||
for n, v := range f.floats {
|
||||
vars.set(n, fmt.Sprint(*v))
|
||||
}
|
||||
for n, v := range f.strings {
|
||||
vars.set(n, *v)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type flagsInstalled struct {
|
||||
ints map[string]*int
|
||||
bools map[string]*bool
|
||||
floats map[string]*float64
|
||||
strings map[string]*string
|
||||
}
|
||||
|
||||
// isBuildID determines if the profile may contain a build ID, by
|
||||
// checking that it is a string of hex digits.
|
||||
func isBuildID(id string) bool {
|
||||
return strings.Trim(id, "0123456789abcdefABCDEF") == ""
|
||||
}
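// Illustrative examples, not part of the vendored source:
//
//	isBuildID("7f6a2b3c9d") // true: hex digits only
//	isBuildID("./mybinary") // false: contains non-hex characters
//	isBuildID("")           // also true, so this is only a heuristic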
|
||||
|
||||
func sampleIndex(flag *bool, si string, sampleType, option string, ui plugin.UI) string {
|
||||
if *flag {
|
||||
if si == "" {
|
||||
return sampleType
|
||||
}
|
||||
ui.PrintErr("Multiple value selections, ignoring ", option)
|
||||
}
|
||||
return si
|
||||
}
|
||||
|
||||
func outputFormat(bcmd map[string]*bool, acmd map[string]*string) (cmd []string, err error) {
|
||||
for n, b := range bcmd {
|
||||
if *b {
|
||||
if cmd != nil {
|
||||
return nil, errors.New("must set at most one output format")
|
||||
}
|
||||
cmd = []string{n}
|
||||
}
|
||||
}
|
||||
for n, s := range acmd {
|
||||
if *s != "" {
|
||||
if cmd != nil {
|
||||
return nil, errors.New("must set at most one output format")
|
||||
}
|
||||
cmd = []string{n, *s}
|
||||
}
|
||||
}
|
||||
return cmd, nil
|
||||
}
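// Illustrative sketch, not part of the vendored source: with -top set,
// outputFormat returns ["top"]; with -list=alloc.* it returns
// ["list", "alloc.*"]; setting both yields the "must set at most one output
// format" error above. The regexp "alloc.*" is just an example.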
|
||||
|
||||
var usageMsgHdr = `usage:
|
||||
|
||||
Produce output in the specified format.
|
||||
|
||||
pprof <format> [options] [binary] <source> ...
|
||||
|
||||
Omit the format to get an interactive shell whose commands can be used
|
||||
to generate various views of a profile
|
||||
|
||||
pprof [options] [binary] <source> ...
|
||||
|
||||
Omit the format and provide the "-http" flag to get an interactive web
|
||||
interface at the specified host:port that can be used to navigate through
|
||||
various views of a profile.
|
||||
|
||||
pprof -http [host]:[port] [options] [binary] <source> ...
|
||||
|
||||
Details:
|
||||
`
|
||||
|
||||
var usageMsgSrc = "\n\n" +
|
||||
" Source options:\n" +
|
||||
" -seconds Duration for time-based profile collection\n" +
|
||||
" -timeout Timeout in seconds for profile collection\n" +
|
||||
" -buildid Override build id for main binary\n" +
|
||||
" -add_comment Free-form annotation to add to the profile\n" +
|
||||
" Displayed on some reports or with pprof -comments\n" +
|
||||
" -diff_base source Source of base profile for comparison\n" +
|
||||
" -base source Source of base profile for profile subtraction\n" +
|
||||
" profile.pb.gz Profile in compressed protobuf format\n" +
|
||||
" legacy_profile Profile in legacy pprof format\n" +
|
||||
" http://host/profile URL for profile handler to retrieve\n" +
|
||||
" -symbolize= Controls source of symbol information\n" +
|
||||
" none Do not attempt symbolization\n" +
|
||||
" local Examine only local binaries\n" +
|
||||
" fastlocal Only get function names from local binaries\n" +
|
||||
" remote Do not examine local binaries\n" +
|
||||
" force Force re-symbolization\n" +
|
||||
" Binary Local path or build id of binary for symbolization\n"
|
||||
|
||||
var usageMsgVars = "\n\n" +
|
||||
" Misc options:\n" +
|
||||
" -http Provide web interface at host:port.\n" +
|
||||
" Host is optional and 'localhost' by default.\n" +
|
||||
" Port is optional and a randomly available port by default.\n" +
|
||||
" -no_browser Skip opening a browser for the interactive web UI.\n" +
|
||||
" -tools Search path for object tools\n" +
|
||||
"\n" +
|
||||
" Legacy convenience options:\n" +
|
||||
" -inuse_space Same as -sample_index=inuse_space\n" +
|
||||
" -inuse_objects Same as -sample_index=inuse_objects\n" +
|
||||
" -alloc_space Same as -sample_index=alloc_space\n" +
|
||||
" -alloc_objects Same as -sample_index=alloc_objects\n" +
|
||||
" -total_delay Same as -sample_index=delay\n" +
|
||||
" -contentions Same as -sample_index=contentions\n" +
|
||||
" -mean_delay Same as -mean -sample_index=delay\n" +
|
||||
"\n" +
|
||||
" Environment Variables:\n" +
|
||||
" PPROF_TMPDIR Location for saved profiles (default $HOME/pprof)\n" +
|
||||
" PPROF_TOOLS Search path for object-level tools\n" +
|
||||
" PPROF_BINARY_PATH Search path for local binary files\n" +
|
||||
" default: $HOME/pprof/binaries\n" +
|
||||
" searches $name, $path, $buildid/$name, $path/$buildid\n" +
|
||||
" * On Windows, %USERPROFILE% is used instead of $HOME"
|
566
vendor/github.com/google/pprof/internal/driver/commands.go
generated
vendored
Normal file
566
vendor/github.com/google/pprof/internal/driver/commands.go
generated
vendored
Normal file
@@ -0,0 +1,566 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"os/exec"
|
||||
"runtime"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/internal/report"
|
||||
)
|
||||
|
||||
// commands describes the commands accepted by pprof.
|
||||
type commands map[string]*command
|
||||
|
||||
// command describes the actions for a pprof command. Includes a
|
||||
// function for command-line completion, the report format to use
|
||||
// during report generation, any postprocessing functions, and whether
|
||||
// the command expects a regexp parameter (typically a function name).
|
||||
type command struct {
|
||||
format int // report format to generate
|
||||
postProcess PostProcessor // postprocessing to run on report
|
||||
visualizer PostProcessor // display output using some callback
|
||||
hasParam bool // collect a parameter from the CLI
|
||||
description string // single-line description text saying what the command does
|
||||
usage string // multi-line help text saying how the command is used
|
||||
}
|
||||
|
||||
// help returns a help string for a command.
|
||||
func (c *command) help(name string) string {
|
||||
message := c.description + "\n"
|
||||
if c.usage != "" {
|
||||
message += " Usage:\n"
|
||||
lines := strings.Split(c.usage, "\n")
|
||||
for _, line := range lines {
|
||||
message += fmt.Sprintf(" %s\n", line)
|
||||
}
|
||||
}
|
||||
return message + "\n"
|
||||
}
|
||||
|
||||
// AddCommand adds an additional command to the set of commands
|
||||
// accepted by pprof. This enables extensions to add new commands for
|
||||
// specialized visualization formats. If the command specified already
|
||||
// exists, it is overwritten.
|
||||
func AddCommand(cmd string, format int, post PostProcessor, desc, usage string) {
|
||||
pprofCommands[cmd] = &command{format, post, nil, false, desc, usage}
|
||||
}
|
||||
|
||||
// SetVariableDefault sets the default value for a pprof
|
||||
// variable. This enables extensions to set their own defaults.
|
||||
func SetVariableDefault(variable, value string) {
|
||||
if v := pprofVariables[variable]; v != nil {
|
||||
v.value = value
|
||||
}
|
||||
}
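// Illustrative sketch, not part of the vendored source: an extension built on
// this package could register an extra report format and adjust a default,
// for example (the command name "rawdot" and the new default are hypothetical):
//
//	AddCommand("rawdot", report.Dot, nil, "Output a DOT graph for external tooling", "")
//	SetVariableDefault("nodecount", "200")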
|
||||
|
||||
// PostProcessor is a function that applies post-processing to the report output
|
||||
type PostProcessor func(input io.Reader, output io.Writer, ui plugin.UI) error
|
||||
|
||||
// interactiveMode is true if pprof is running on interactive mode, reading
|
||||
// commands from its shell.
|
||||
var interactiveMode = false
|
||||
|
||||
// pprofCommands are the report generation commands recognized by pprof.
|
||||
var pprofCommands = commands{
|
||||
// Commands that require no post-processing.
|
||||
"comments": {report.Comments, nil, nil, false, "Output all profile comments", ""},
|
||||
"disasm": {report.Dis, nil, nil, true, "Output assembly listings annotated with samples", listHelp("disasm", true)},
|
||||
"dot": {report.Dot, nil, nil, false, "Outputs a graph in DOT format", reportHelp("dot", false, true)},
|
||||
"list": {report.List, nil, nil, true, "Output annotated source for functions matching regexp", listHelp("list", false)},
|
||||
"peek": {report.Tree, nil, nil, true, "Output callers/callees of functions matching regexp", "peek func_regex\nDisplay callers and callees of functions matching func_regex."},
|
||||
"raw": {report.Raw, nil, nil, false, "Outputs a text representation of the raw profile", ""},
|
||||
"tags": {report.Tags, nil, nil, false, "Outputs all tags in the profile", "tags [tag_regex]* [-ignore_regex]* [>file]\nList tags with key:value matching tag_regex and exclude ignore_regex."},
|
||||
"text": {report.Text, nil, nil, false, "Outputs top entries in text form", reportHelp("text", true, true)},
|
||||
"top": {report.Text, nil, nil, false, "Outputs top entries in text form", reportHelp("top", true, true)},
|
||||
"traces": {report.Traces, nil, nil, false, "Outputs all profile samples in text form", ""},
|
||||
"tree": {report.Tree, nil, nil, false, "Outputs a text rendering of call graph", reportHelp("tree", true, true)},
|
||||
|
||||
// Save binary formats to a file
|
||||
"callgrind": {report.Callgrind, nil, awayFromTTY("callgraph.out"), false, "Outputs a graph in callgrind format", reportHelp("callgrind", false, true)},
|
||||
"proto": {report.Proto, nil, awayFromTTY("pb.gz"), false, "Outputs the profile in compressed protobuf format", ""},
|
||||
"topproto": {report.TopProto, nil, awayFromTTY("pb.gz"), false, "Outputs top entries in compressed protobuf format", ""},
|
||||
|
||||
// Generate report in DOT format and postprocess with dot
|
||||
"gif": {report.Dot, invokeDot("gif"), awayFromTTY("gif"), false, "Outputs a graph image in GIF format", reportHelp("gif", false, true)},
|
||||
"pdf": {report.Dot, invokeDot("pdf"), awayFromTTY("pdf"), false, "Outputs a graph in PDF format", reportHelp("pdf", false, true)},
|
||||
"png": {report.Dot, invokeDot("png"), awayFromTTY("png"), false, "Outputs a graph image in PNG format", reportHelp("png", false, true)},
|
||||
"ps": {report.Dot, invokeDot("ps"), awayFromTTY("ps"), false, "Outputs a graph in PS format", reportHelp("ps", false, true)},
|
||||
|
||||
// Save SVG output into a file
|
||||
"svg": {report.Dot, massageDotSVG(), awayFromTTY("svg"), false, "Outputs a graph in SVG format", reportHelp("svg", false, true)},
|
||||
|
||||
// Visualize postprocessed dot output
|
||||
"eog": {report.Dot, invokeDot("svg"), invokeVisualizer("svg", []string{"eog"}), false, "Visualize graph through eog", reportHelp("eog", false, false)},
|
||||
"evince": {report.Dot, invokeDot("pdf"), invokeVisualizer("pdf", []string{"evince"}), false, "Visualize graph through evince", reportHelp("evince", false, false)},
|
||||
"gv": {report.Dot, invokeDot("ps"), invokeVisualizer("ps", []string{"gv --noantialias"}), false, "Visualize graph through gv", reportHelp("gv", false, false)},
|
||||
"web": {report.Dot, massageDotSVG(), invokeVisualizer("svg", browsers()), false, "Visualize graph through web browser", reportHelp("web", false, false)},
|
||||
|
||||
// Visualize callgrind output
|
||||
"kcachegrind": {report.Callgrind, nil, invokeVisualizer("grind", kcachegrind), false, "Visualize report in KCachegrind", reportHelp("kcachegrind", false, false)},
|
||||
|
||||
// Visualize HTML directly generated by report.
|
||||
"weblist": {report.WebList, nil, invokeVisualizer("html", browsers()), true, "Display annotated source in a web browser", listHelp("weblist", false)},
|
||||
}
|
||||
|
||||
// pprofVariables are the configuration parameters that affect the
|
||||
// reports generated by pprof.
|
||||
var pprofVariables = variables{
|
||||
// Filename for file-based output formats, stdout by default.
|
||||
"output": &variable{stringKind, "", "", helpText("Output filename for file-based outputs")},
|
||||
|
||||
// Comparisons.
|
||||
"drop_negative": &variable{boolKind, "f", "", helpText(
|
||||
"Ignore negative differences",
|
||||
"Do not show any locations with values <0.")},
|
||||
|
||||
// Graph handling options.
|
||||
"call_tree": &variable{boolKind, "f", "", helpText(
|
||||
"Create a context-sensitive call tree",
|
||||
"Treat locations reached through different paths as separate.")},
|
||||
|
||||
// Display options.
|
||||
"relative_percentages": &variable{boolKind, "f", "", helpText(
|
||||
"Show percentages relative to focused subgraph",
|
||||
"If unset, percentages are relative to full graph before focusing",
|
||||
"to facilitate comparison with original graph.")},
|
||||
"unit": &variable{stringKind, "minimum", "", helpText(
|
||||
"Measurement units to display",
|
||||
"Scale the sample values to this unit.",
|
||||
"For time-based profiles, use seconds, milliseconds, nanoseconds, etc.",
|
||||
"For memory profiles, use megabytes, kilobytes, bytes, etc.",
|
||||
"Using auto will scale each value independently to the most natural unit.")},
|
||||
"compact_labels": &variable{boolKind, "f", "", "Show minimal headers"},
|
||||
"source_path": &variable{stringKind, "", "", "Search path for source files"},
|
||||
"trim_path": &variable{stringKind, "", "", "Path to trim from source paths before search"},
|
||||
|
||||
// Filtering options
|
||||
"nodecount": &variable{intKind, "-1", "", helpText(
|
||||
"Max number of nodes to show",
|
||||
"Uses heuristics to limit the number of locations to be displayed.",
|
||||
"On graphs, dotted edges represent paths through nodes that have been removed.")},
|
||||
"nodefraction": &variable{floatKind, "0.005", "", "Hide nodes below <f>*total"},
|
||||
"edgefraction": &variable{floatKind, "0.001", "", "Hide edges below <f>*total"},
|
||||
"trim": &variable{boolKind, "t", "", helpText(
|
||||
"Honor nodefraction/edgefraction/nodecount defaults",
|
||||
"Set to false to get the full profile, without any trimming.")},
|
||||
"focus": &variable{stringKind, "", "", helpText(
|
||||
"Restricts to samples going through a node matching regexp",
|
||||
"Discard samples that do not include a node matching this regexp.",
|
||||
"Matching includes the function name, filename or object name.")},
|
||||
"ignore": &variable{stringKind, "", "", helpText(
|
||||
"Skips paths going through any nodes matching regexp",
|
||||
"If set, discard samples that include a node matching this regexp.",
|
||||
"Matching includes the function name, filename or object name.")},
|
||||
"prune_from": &variable{stringKind, "", "", helpText(
|
||||
"Drops any functions below the matched frame.",
|
||||
"If set, any frames matching the specified regexp and any frames",
|
||||
"below it will be dropped from each sample.")},
|
||||
"hide": &variable{stringKind, "", "", helpText(
|
||||
"Skips nodes matching regexp",
|
||||
"Discard nodes that match this location.",
|
||||
"Other nodes from samples that include this location will be shown.",
|
||||
"Matching includes the function name, filename or object name.")},
|
||||
"show": &variable{stringKind, "", "", helpText(
|
||||
"Only show nodes matching regexp",
|
||||
"If set, only show nodes that match this location.",
|
||||
"Matching includes the function name, filename or object name.")},
|
||||
"show_from": &variable{stringKind, "", "", helpText(
|
||||
"Drops functions above the highest matched frame.",
|
||||
"If set, all frames above the highest match are dropped from every sample.",
|
||||
"Matching includes the function name, filename or object name.")},
|
||||
"tagfocus": &variable{stringKind, "", "", helpText(
|
||||
"Restricts to samples with tags in range or matched by regexp",
|
||||
"Use name=value syntax to limit the matching to a specific tag.",
|
||||
"Numeric tag filter examples: 1kb, 1kb:10kb, memory=32mb:",
|
||||
"String tag filter examples: foo, foo.*bar, mytag=foo.*bar")},
|
||||
"tagignore": &variable{stringKind, "", "", helpText(
|
||||
"Discard samples with tags in range or matched by regexp",
|
||||
"Use name=value syntax to limit the matching to a specific tag.",
|
||||
"Numeric tag filter examples: 1kb, 1kb:10kb, memory=32mb:",
|
||||
"String tag filter examples: foo, foo.*bar, mytag=foo.*bar")},
|
||||
"tagshow": &variable{stringKind, "", "", helpText(
|
||||
"Only consider tags matching this regexp",
|
||||
"Discard tags that do not match this regexp")},
|
||||
"taghide": &variable{stringKind, "", "", helpText(
|
||||
"Skip tags matching this regexp",
|
||||
"Discard tags that match this regexp")},
|
||||
// Heap profile options
|
||||
"divide_by": &variable{floatKind, "1", "", helpText(
|
||||
"Ratio to divide all samples before visualization",
|
||||
"Divide all samples values by a constant, eg the number of processors or jobs.")},
|
||||
"mean": &variable{boolKind, "f", "", helpText(
|
||||
"Average sample value over first value (count)",
|
||||
"For memory profiles, report average memory per allocation.",
|
||||
"For time-based profiles, report average time per event.")},
|
||||
"sample_index": &variable{stringKind, "", "", helpText(
|
||||
"Sample value to report (0-based index or name)",
|
||||
"Profiles contain multiple values per sample.",
|
||||
"Use sample_index=i to select the ith value (starting at 0).")},
|
||||
"normalize": &variable{boolKind, "f", "", helpText(
|
||||
"Scales profile based on the base profile.")},
|
||||
|
||||
// Data sorting criteria
|
||||
"flat": &variable{boolKind, "t", "cumulative", helpText("Sort entries based on own weight")},
|
||||
"cum": &variable{boolKind, "f", "cumulative", helpText("Sort entries based on cumulative weight")},
|
||||
|
||||
// Output granularity
|
||||
"functions": &variable{boolKind, "t", "granularity", helpText(
|
||||
"Aggregate at the function level.",
|
||||
"Ignores the filename where the function was defined.")},
|
||||
"filefunctions": &variable{boolKind, "t", "granularity", helpText(
|
||||
"Aggregate at the function level.",
|
||||
"Takes into account the filename where the function was defined.")},
|
||||
"files": &variable{boolKind, "f", "granularity", "Aggregate at the file level."},
|
||||
"lines": &variable{boolKind, "f", "granularity", "Aggregate at the source code line level."},
|
||||
"addresses": &variable{boolKind, "f", "granularity", helpText(
|
||||
"Aggregate at the address level.",
|
||||
"Includes functions' addresses in the output.")},
|
||||
"noinlines": &variable{boolKind, "f", "", helpText(
|
||||
"Ignore inlines.",
|
||||
"Attributes inlined functions to their first out-of-line caller.")},
|
||||
}
|
||||
|
||||
func helpText(s ...string) string {
|
||||
return strings.Join(s, "\n") + "\n"
|
||||
}
|
||||
|
||||
// usage returns a string describing the pprof commands and variables.
|
||||
// If commandLine is set, the output reflects command-line usage.
|
||||
func usage(commandLine bool) string {
|
||||
var prefix string
|
||||
if commandLine {
|
||||
prefix = "-"
|
||||
}
|
||||
fmtHelp := func(c, d string) string {
|
||||
return fmt.Sprintf(" %-16s %s", c, strings.SplitN(d, "\n", 2)[0])
|
||||
}
|
||||
|
||||
var commands []string
|
||||
for name, cmd := range pprofCommands {
|
||||
commands = append(commands, fmtHelp(prefix+name, cmd.description))
|
||||
}
|
||||
sort.Strings(commands)
|
||||
|
||||
var help string
|
||||
if commandLine {
|
||||
help = " Output formats (select at most one):\n"
|
||||
} else {
|
||||
help = " Commands:\n"
|
||||
commands = append(commands, fmtHelp("o/options", "List options and their current values"))
|
||||
commands = append(commands, fmtHelp("quit/exit/^D", "Exit pprof"))
|
||||
}
|
||||
|
||||
help = help + strings.Join(commands, "\n") + "\n\n" +
|
||||
" Options:\n"
|
||||
|
||||
// Print help for variables after sorting them.
|
||||
// Collect radio variables by their group name to print them together.
|
||||
radioOptions := make(map[string][]string)
|
||||
var variables []string
|
||||
for name, vr := range pprofVariables {
|
||||
if vr.group != "" {
|
||||
radioOptions[vr.group] = append(radioOptions[vr.group], name)
|
||||
continue
|
||||
}
|
||||
variables = append(variables, fmtHelp(prefix+name, vr.help))
|
||||
}
|
||||
sort.Strings(variables)
|
||||
|
||||
help = help + strings.Join(variables, "\n") + "\n\n" +
|
||||
" Option groups (only set one per group):\n"
|
||||
|
||||
var radioStrings []string
|
||||
for radio, ops := range radioOptions {
|
||||
sort.Strings(ops)
|
||||
s := []string{fmtHelp(radio, "")}
|
||||
for _, op := range ops {
|
||||
s = append(s, " "+fmtHelp(prefix+op, pprofVariables[op].help))
|
||||
}
|
||||
|
||||
radioStrings = append(radioStrings, strings.Join(s, "\n"))
|
||||
}
|
||||
sort.Strings(radioStrings)
|
||||
return help + strings.Join(radioStrings, "\n")
|
||||
}
|
||||
|
||||
func reportHelp(c string, cum, redirect bool) string {
|
||||
h := []string{
|
||||
c + " [n] [focus_regex]* [-ignore_regex]*",
|
||||
"Include up to n samples",
|
||||
"Include samples matching focus_regex, and exclude ignore_regex.",
|
||||
}
|
||||
if cum {
|
||||
h[0] += " [-cum]"
|
||||
h = append(h, "-cum sorts the output by cumulative weight")
|
||||
}
|
||||
if redirect {
|
||||
h[0] += " >f"
|
||||
h = append(h, "Optionally save the report to the file f")
|
||||
}
|
||||
return strings.Join(h, "\n")
|
||||
}
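// For example (illustrative, not part of the vendored source),
// reportHelp("top", true, true) yields a usage string that begins with
//
//	top [n] [focus_regex]* [-ignore_regex]* [-cum] >f
//
// followed by the explanatory lines appended above.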
|
||||
|
||||
func listHelp(c string, redirect bool) string {
|
||||
h := []string{
|
||||
c + "<func_regex|address> [-focus_regex]* [-ignore_regex]*",
|
||||
"Include functions matching func_regex, or including the address specified.",
|
||||
"Include samples matching focus_regex, and exclude ignore_regex.",
|
||||
}
|
||||
if redirect {
|
||||
h[0] += " >f"
|
||||
h = append(h, "Optionally save the report to the file f")
|
||||
}
|
||||
return strings.Join(h, "\n")
|
||||
}
|
||||
|
||||
// browsers returns a list of commands to attempt for web visualization.
|
||||
func browsers() []string {
|
||||
var cmds []string
|
||||
if userBrowser := os.Getenv("BROWSER"); userBrowser != "" {
|
||||
cmds = append(cmds, userBrowser)
|
||||
}
|
||||
switch runtime.GOOS {
|
||||
case "darwin":
|
||||
cmds = append(cmds, "/usr/bin/open")
|
||||
case "windows":
|
||||
cmds = append(cmds, "cmd /c start")
|
||||
default:
|
||||
// Commands opening browsers are prioritized over xdg-open, so a browser
|
||||
// command can be used on linux to open the .svg file generated by the -web
|
||||
// command (the .svg file includes embedded javascript so is best viewed in
|
||||
// a browser).
|
||||
cmds = append(cmds, []string{"chrome", "google-chrome", "chromium", "firefox", "sensible-browser"}...)
|
||||
if os.Getenv("DISPLAY") != "" {
|
||||
// xdg-open is only for use in a desktop environment.
|
||||
cmds = append(cmds, "xdg-open")
|
||||
}
|
||||
}
|
||||
return cmds
|
||||
}
|
||||
|
||||
var kcachegrind = []string{"kcachegrind"}
|
||||
|
||||
// awayFromTTY saves the output in a file if it would otherwise go to
|
||||
// the terminal screen. This is used to avoid dumping binary data on
|
||||
// the screen.
|
||||
func awayFromTTY(format string) PostProcessor {
|
||||
return func(input io.Reader, output io.Writer, ui plugin.UI) error {
|
||||
if output == os.Stdout && (ui.IsTerminal() || interactiveMode) {
|
||||
tempFile, err := newTempFile("", "profile", "."+format)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
ui.PrintErr("Generating report in ", tempFile.Name())
|
||||
output = tempFile
|
||||
}
|
||||
_, err := io.Copy(output, input)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
func invokeDot(format string) PostProcessor {
|
||||
return func(input io.Reader, output io.Writer, ui plugin.UI) error {
|
||||
cmd := exec.Command("dot", "-T"+format)
|
||||
cmd.Stdin, cmd.Stdout, cmd.Stderr = input, output, os.Stderr
|
||||
if err := cmd.Run(); err != nil {
|
||||
return fmt.Errorf("failed to execute dot. Is Graphviz installed? Error: %v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
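// Illustrative sketch, not part of the vendored source: the returned
// post-processor pipes the DOT report through Graphviz, so "dot" must be on
// the PATH. The reader, writer and UI below are hypothetical placeholders:
//
//	post := invokeDot("svg")
//	err := post(dotReport, svgOut, ui) // runs: dot -Tsvg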
|
||||
|
||||
// massageDotSVG invokes the dot tool to generate an SVG image and alters
|
||||
// the image to have panning capabilities when viewed in a browser.
|
||||
func massageDotSVG() PostProcessor {
|
||||
generateSVG := invokeDot("svg")
|
||||
return func(input io.Reader, output io.Writer, ui plugin.UI) error {
|
||||
baseSVG := new(bytes.Buffer)
|
||||
if err := generateSVG(input, baseSVG, ui); err != nil {
|
||||
return err
|
||||
}
|
||||
_, err := output.Write([]byte(massageSVG(baseSVG.String())))
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
func invokeVisualizer(suffix string, visualizers []string) PostProcessor {
|
||||
return func(input io.Reader, output io.Writer, ui plugin.UI) error {
|
||||
tempFile, err := newTempFile(os.TempDir(), "pprof", "."+suffix)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
deferDeleteTempFile(tempFile.Name())
|
||||
if _, err := io.Copy(tempFile, input); err != nil {
|
||||
return err
|
||||
}
|
||||
tempFile.Close()
|
||||
// Try visualizers until one is successful
|
||||
for _, v := range visualizers {
|
||||
// Separate command and arguments for exec.Command.
|
||||
args := strings.Split(v, " ")
|
||||
if len(args) == 0 {
|
||||
continue
|
||||
}
|
||||
viewer := exec.Command(args[0], append(args[1:], tempFile.Name())...)
|
||||
viewer.Stderr = os.Stderr
|
||||
if err = viewer.Start(); err == nil {
|
||||
// Wait for a second so that the visualizer has a chance to
|
||||
// open the input file. This needs to be done even if we're
|
||||
// waiting for the visualizer as it can be just a wrapper that
|
||||
// spawns a browser tab and returns right away.
|
||||
defer func(t <-chan time.Time) {
|
||||
<-t
|
||||
}(time.After(time.Second))
|
||||
// On interactive mode, let the visualizer run in the background
|
||||
// so other commands can be issued.
|
||||
if !interactiveMode {
|
||||
return viewer.Wait()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// variables describe the configuration parameters recognized by pprof.
|
||||
type variables map[string]*variable
|
||||
|
||||
// variable is a single configuration parameter.
|
||||
type variable struct {
|
||||
kind int // How to interpret the value, must be one of the enums below.
|
||||
value string // Effective value. Only values appropriate for the Kind should be set.
|
||||
group string // boolKind variables with the same Group != "" cannot be set simultaneously.
|
||||
help string // Text describing the variable, in multiple lines separated by newline.
|
||||
}
|
||||
|
||||
const (
|
||||
// variable.kind must be one of these variables.
|
||||
boolKind = iota
|
||||
intKind
|
||||
floatKind
|
||||
stringKind
|
||||
)
|
||||
|
||||
// set updates the value of a variable, checking that the value is
|
||||
// suitable for the variable Kind.
|
||||
func (vars variables) set(name, value string) error {
|
||||
v := vars[name]
|
||||
if v == nil {
|
||||
return fmt.Errorf("no variable %s", name)
|
||||
}
|
||||
var err error
|
||||
switch v.kind {
|
||||
case boolKind:
|
||||
var b bool
|
||||
if b, err = stringToBool(value); err == nil {
|
||||
if v.group != "" && !b {
|
||||
err = fmt.Errorf("%q can only be set to true", name)
|
||||
}
|
||||
}
|
||||
case intKind:
|
||||
_, err = strconv.Atoi(value)
|
||||
case floatKind:
|
||||
_, err = strconv.ParseFloat(value, 64)
|
||||
case stringKind:
|
||||
// Remove quotes, particularly useful for empty values.
|
||||
if len(value) > 1 && strings.HasPrefix(value, `"`) && strings.HasSuffix(value, `"`) {
|
||||
value = value[1 : len(value)-1]
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
vars[name].value = value
|
||||
if group := vars[name].group; group != "" {
|
||||
for vname, vvar := range vars {
|
||||
if vvar.group == group && vname != name {
|
||||
vvar.value = "f"
|
||||
}
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
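// Illustrative examples, not part of the vendored source: boolean variables
// that share a group behave like radio buttons, so
//
//	pprofVariables.set("lines", "t")     // also resets the other "granularity" options to "f"
//	pprofVariables.set("files", "false") // error: a grouped bool can only be set to true
//	pprofVariables.set("nodecount", "x") // error: not an integer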
|
||||
|
||||
// boolValue returns the value of a boolean variable.
|
||||
func (v *variable) boolValue() bool {
|
||||
b, err := stringToBool(v.value)
|
||||
if err != nil {
|
||||
panic("unexpected value " + v.value + " for bool ")
|
||||
}
|
||||
return b
|
||||
}
|
||||
|
||||
// intValue returns the value of an intKind variable.
|
||||
func (v *variable) intValue() int {
|
||||
i, err := strconv.Atoi(v.value)
|
||||
if err != nil {
|
||||
panic("unexpected value " + v.value + " for int ")
|
||||
}
|
||||
return i
|
||||
}
|
||||
|
||||
// floatValue returns the value of a Float variable.
|
||||
func (v *variable) floatValue() float64 {
|
||||
f, err := strconv.ParseFloat(v.value, 64)
|
||||
if err != nil {
|
||||
panic("unexpected value " + v.value + " for float ")
|
||||
}
|
||||
return f
|
||||
}
|
||||
|
||||
// stringValue returns a canonical representation for a variable.
|
||||
func (v *variable) stringValue() string {
|
||||
switch v.kind {
|
||||
case boolKind:
|
||||
return fmt.Sprint(v.boolValue())
|
||||
case intKind:
|
||||
return fmt.Sprint(v.intValue())
|
||||
case floatKind:
|
||||
return fmt.Sprint(v.floatValue())
|
||||
}
|
||||
return v.value
|
||||
}
|
||||
|
||||
func stringToBool(s string) (bool, error) {
|
||||
switch strings.ToLower(s) {
|
||||
case "true", "t", "yes", "y", "1", "":
|
||||
return true, nil
|
||||
case "false", "f", "no", "n", "0":
|
||||
return false, nil
|
||||
default:
|
||||
return false, fmt.Errorf(`illegal value "%s" for bool variable`, s)
|
||||
}
|
||||
}
|
||||
|
||||
// makeCopy returns a duplicate of a set of shell variables.
|
||||
func (vars variables) makeCopy() variables {
|
||||
varscopy := make(variables, len(vars))
|
||||
for n, v := range vars {
|
||||
vcopy := *v
|
||||
varscopy[n] = &vcopy
|
||||
}
|
||||
return varscopy
|
||||
}
|
330
vendor/github.com/google/pprof/internal/driver/driver.go
generated
vendored
Normal file
330
vendor/github.com/google/pprof/internal/driver/driver.go
generated
vendored
Normal file
@@ -0,0 +1,330 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// Package driver implements the core pprof functionality. It can be
|
||||
// parameterized with a flag implementation, and with fetch and symbolize
|
||||
// mechanisms.
|
||||
package driver
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/internal/report"
|
||||
"github.com/google/pprof/profile"
|
||||
)
|
||||
|
||||
// PProf acquires a profile, and symbolizes it using a profile
|
||||
// manager. Then it generates a report formatted according to the
|
||||
// options selected through the flags package.
|
||||
func PProf(eo *plugin.Options) error {
|
||||
// Remove any temporary files created during pprof processing.
|
||||
defer cleanupTempFiles()
|
||||
|
||||
o := setDefaults(eo)
|
||||
|
||||
src, cmd, err := parseFlags(o)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
p, err := fetchProfiles(src, o)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if cmd != nil {
|
||||
return generateReport(p, cmd, pprofVariables, o)
|
||||
}
|
||||
|
||||
if src.HTTPHostport != "" {
|
||||
return serveWebInterface(src.HTTPHostport, p, o, src.HTTPDisableBrowser)
|
||||
}
|
||||
return interactive(p, o)
|
||||
}
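// Illustrative sketch, not part of the vendored source: a caller supplies
// plugin options (setDefaults above is expected to fill in any left nil) and
// PProf reads the command line through the provided Flagset, for example:
//
//	if err := PProf(&plugin.Options{}); err != nil {
//		fmt.Fprintf(os.Stderr, "pprof: %v\n", err)
//		os.Exit(2)
//	}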
|
||||
|
||||
func generateRawReport(p *profile.Profile, cmd []string, vars variables, o *plugin.Options) (*command, *report.Report, error) {
|
||||
p = p.Copy() // Prevent modification to the incoming profile.
|
||||
|
||||
// Identify units of numeric tags in profile.
|
||||
numLabelUnits := identifyNumLabelUnits(p, o.UI)
|
||||
|
||||
// Get report output format
|
||||
c := pprofCommands[cmd[0]]
|
||||
if c == nil {
|
||||
panic("unexpected nil command")
|
||||
}
|
||||
|
||||
vars = applyCommandOverrides(cmd[0], c.format, vars)
|
||||
|
||||
// Delay focus after configuring report to get percentages on all samples.
|
||||
relative := vars["relative_percentages"].boolValue()
|
||||
if relative {
|
||||
if err := applyFocus(p, numLabelUnits, vars, o.UI); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
}
|
||||
ropt, err := reportOptions(p, numLabelUnits, vars)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
ropt.OutputFormat = c.format
|
||||
if len(cmd) == 2 {
|
||||
s, err := regexp.Compile(cmd[1])
|
||||
if err != nil {
|
||||
return nil, nil, fmt.Errorf("parsing argument regexp %s: %v", cmd[1], err)
|
||||
}
|
||||
ropt.Symbol = s
|
||||
}
|
||||
|
||||
rpt := report.New(p, ropt)
|
||||
if !relative {
|
||||
if err := applyFocus(p, numLabelUnits, vars, o.UI); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
}
|
||||
if err := aggregate(p, vars); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
return c, rpt, nil
|
||||
}
|
||||
|
||||
func generateReport(p *profile.Profile, cmd []string, vars variables, o *plugin.Options) error {
|
||||
c, rpt, err := generateRawReport(p, cmd, vars, o)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Generate the report.
|
||||
dst := new(bytes.Buffer)
|
||||
if err := report.Generate(dst, rpt, o.Obj); err != nil {
|
||||
return err
|
||||
}
|
||||
src := dst
|
||||
|
||||
// If necessary, perform any data post-processing.
|
||||
if c.postProcess != nil {
|
||||
dst = new(bytes.Buffer)
|
||||
if err := c.postProcess(src, dst, o.UI); err != nil {
|
||||
return err
|
||||
}
|
||||
src = dst
|
||||
}
|
||||
|
||||
// If no output is specified, use default visualizer.
|
||||
output := vars["output"].value
|
||||
if output == "" {
|
||||
if c.visualizer != nil {
|
||||
return c.visualizer(src, os.Stdout, o.UI)
|
||||
}
|
||||
_, err := src.WriteTo(os.Stdout)
|
||||
return err
|
||||
}
|
||||
|
||||
// Output to specified file.
|
||||
o.UI.PrintErr("Generating report in ", output)
|
||||
out, err := o.Writer.Open(output)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := src.WriteTo(out); err != nil {
|
||||
out.Close()
|
||||
return err
|
||||
}
|
||||
return out.Close()
|
||||
}
|
||||
|
||||
func applyCommandOverrides(cmd string, outputFormat int, v variables) variables {
|
||||
// Some report types override the trim flag to false below. This is to make
|
||||
// sure the default heuristics of excluding insignificant nodes and edges
|
||||
// from the call graph do not apply. One example where it is important is
|
||||
// annotated source or disassembly listing. Those reports run on a specific
|
||||
// function (or functions), but the trimming is applied before the function
|
||||
// data is selected. So, with trimming enabled, the report could end up
|
||||
// showing no data if the specified function is "uninteresting" as far as the
|
||||
// trimming is concerned.
|
||||
trim := v["trim"].boolValue()
|
||||
|
||||
switch cmd {
|
||||
case "disasm", "weblist":
|
||||
trim = false
|
||||
v.set("addresses", "t")
|
||||
// Force the 'noinlines' mode so that source locations for a given address
|
||||
// collapse and there is only one for the given address. Without this
|
||||
// cumulative metrics would be double-counted when annotating the assembly.
|
||||
// This is because the merge is done by address and in case of an inlined
|
||||
// stack each of the inlined entries is a separate callgraph node.
|
||||
v.set("noinlines", "t")
|
||||
case "peek":
|
||||
trim = false
|
||||
case "list":
|
||||
trim = false
|
||||
v.set("lines", "t")
|
||||
// Do not force 'noinlines' to be false so that specifying
|
||||
// "-list foo -noinlines" is supported and works as expected.
|
||||
case "text", "top", "topproto":
|
||||
if v["nodecount"].intValue() == -1 {
|
||||
v.set("nodecount", "0")
|
||||
}
|
||||
default:
|
||||
if v["nodecount"].intValue() == -1 {
|
||||
v.set("nodecount", "80")
|
||||
}
|
||||
}
|
||||
|
||||
switch outputFormat {
|
||||
case report.Proto, report.Raw, report.Callgrind:
|
||||
trim = false
|
||||
v.set("addresses", "t")
|
||||
v.set("noinlines", "f")
|
||||
}
|
||||
|
||||
if !trim {
|
||||
v.set("nodecount", "0")
|
||||
v.set("nodefraction", "0")
|
||||
v.set("edgefraction", "0")
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
func aggregate(prof *profile.Profile, v variables) error {
|
||||
var function, filename, linenumber, address bool
|
||||
inlines := !v["noinlines"].boolValue()
|
||||
switch {
|
||||
case v["addresses"].boolValue():
|
||||
if inlines {
|
||||
return nil
|
||||
}
|
||||
function = true
|
||||
filename = true
|
||||
linenumber = true
|
||||
address = true
|
||||
case v["lines"].boolValue():
|
||||
function = true
|
||||
filename = true
|
||||
linenumber = true
|
||||
case v["files"].boolValue():
|
||||
filename = true
|
||||
case v["functions"].boolValue():
|
||||
function = true
|
||||
case v["filefunctions"].boolValue():
|
||||
function = true
|
||||
filename = true
|
||||
default:
|
||||
return fmt.Errorf("unexpected granularity")
|
||||
}
|
||||
return prof.Aggregate(inlines, function, filename, linenumber, address)
|
||||
}
|
||||
|
||||
func reportOptions(p *profile.Profile, numLabelUnits map[string]string, vars variables) (*report.Options, error) {
|
||||
si, mean := vars["sample_index"].value, vars["mean"].boolValue()
|
||||
value, meanDiv, sample, err := sampleFormat(p, si, mean)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
stype := sample.Type
|
||||
if mean {
|
||||
stype = "mean_" + stype
|
||||
}
|
||||
|
||||
if vars["divide_by"].floatValue() == 0 {
|
||||
return nil, fmt.Errorf("zero divisor specified")
|
||||
}
|
||||
|
||||
var filters []string
|
||||
for _, k := range []string{"focus", "ignore", "hide", "show", "show_from", "tagfocus", "tagignore", "tagshow", "taghide"} {
|
||||
v := vars[k].value
|
||||
if v != "" {
|
||||
filters = append(filters, k+"="+v)
|
||||
}
|
||||
}
|
||||
|
||||
ropt := &report.Options{
|
||||
CumSort: vars["cum"].boolValue(),
|
||||
CallTree: vars["call_tree"].boolValue(),
|
||||
DropNegative: vars["drop_negative"].boolValue(),
|
||||
|
||||
CompactLabels: vars["compact_labels"].boolValue(),
|
||||
Ratio: 1 / vars["divide_by"].floatValue(),
|
||||
|
||||
NodeCount: vars["nodecount"].intValue(),
|
||||
NodeFraction: vars["nodefraction"].floatValue(),
|
||||
EdgeFraction: vars["edgefraction"].floatValue(),
|
||||
|
||||
ActiveFilters: filters,
|
||||
NumLabelUnits: numLabelUnits,
|
||||
|
||||
SampleValue: value,
|
||||
SampleMeanDivisor: meanDiv,
|
||||
SampleType: stype,
|
||||
SampleUnit: sample.Unit,
|
||||
|
||||
OutputUnit: vars["unit"].value,
|
||||
|
||||
SourcePath: vars["source_path"].stringValue(),
|
||||
TrimPath: vars["trim_path"].stringValue(),
|
||||
}
|
||||
|
||||
if len(p.Mapping) > 0 && p.Mapping[0].File != "" {
|
||||
ropt.Title = filepath.Base(p.Mapping[0].File)
|
||||
}
|
||||
|
||||
return ropt, nil
|
||||
}
|
||||
|
||||
// identifyNumLabelUnits returns a map of numeric label keys to the units
|
||||
// associated with those keys.
|
||||
func identifyNumLabelUnits(p *profile.Profile, ui plugin.UI) map[string]string {
|
||||
numLabelUnits, ignoredUnits := p.NumLabelUnits()
|
||||
|
||||
// Print errors for tags with multiple units associated with
|
||||
// a single key.
|
||||
for k, units := range ignoredUnits {
|
||||
ui.PrintErr(fmt.Sprintf("For tag %s used unit %s, also encountered unit(s) %s", k, numLabelUnits[k], strings.Join(units, ", ")))
|
||||
}
|
||||
return numLabelUnits
|
||||
}
|
||||
|
||||
type sampleValueFunc func([]int64) int64
|
||||
|
||||
// sampleFormat returns a function to extract values out of a profile.Sample,
|
||||
// and the type/units of those values.
|
||||
func sampleFormat(p *profile.Profile, sampleIndex string, mean bool) (value, meanDiv sampleValueFunc, v *profile.ValueType, err error) {
|
||||
if len(p.SampleType) == 0 {
|
||||
return nil, nil, nil, fmt.Errorf("profile has no samples")
|
||||
}
|
||||
index, err := p.SampleIndexByName(sampleIndex)
|
||||
if err != nil {
|
||||
return nil, nil, nil, err
|
||||
}
|
||||
value = valueExtractor(index)
|
||||
if mean {
|
||||
meanDiv = valueExtractor(0)
|
||||
}
|
||||
v = p.SampleType[index]
|
||||
return
|
||||
}
|
||||
|
||||
func valueExtractor(ix int) sampleValueFunc {
|
||||
return func(v []int64) int64 {
|
||||
return v[ix]
|
||||
}
|
||||
}
|
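The sample_index plumbing above reduces to a closure that selects one column of each sample's value vector. A minimal standalone sketch of that idea, with illustrative names rather than the internal driver API:

package main

import "fmt"

// valueAt mirrors the valueExtractor helper above: it returns a closure
// that picks column ix out of a sample's value vector.
func valueAt(ix int) func([]int64) int64 {
	return func(v []int64) int64 { return v[ix] }
}

func main() {
	// A fake sample with two value columns, e.g. [samples/count, cpu/nanoseconds].
	sample := []int64{3, 2500000}
	cpu := valueAt(1)
	fmt.Println(cpu(sample)) // prints 2500000
}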
219
vendor/github.com/google/pprof/internal/driver/driver_focus.go
generated
vendored
Normal file
@@ -0,0 +1,219 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package driver

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"

	"github.com/google/pprof/internal/measurement"
	"github.com/google/pprof/internal/plugin"
	"github.com/google/pprof/profile"
)

var tagFilterRangeRx = regexp.MustCompile("([+-]?[[:digit:]]+)([[:alpha:]]+)")

// applyFocus filters samples based on the focus/ignore options
func applyFocus(prof *profile.Profile, numLabelUnits map[string]string, v variables, ui plugin.UI) error {
	focus, err := compileRegexOption("focus", v["focus"].value, nil)
	ignore, err := compileRegexOption("ignore", v["ignore"].value, err)
	hide, err := compileRegexOption("hide", v["hide"].value, err)
	show, err := compileRegexOption("show", v["show"].value, err)
	showfrom, err := compileRegexOption("show_from", v["show_from"].value, err)
	tagfocus, err := compileTagFilter("tagfocus", v["tagfocus"].value, numLabelUnits, ui, err)
	tagignore, err := compileTagFilter("tagignore", v["tagignore"].value, numLabelUnits, ui, err)
	prunefrom, err := compileRegexOption("prune_from", v["prune_from"].value, err)
	if err != nil {
		return err
	}

	fm, im, hm, hnm := prof.FilterSamplesByName(focus, ignore, hide, show)
	warnNoMatches(focus == nil || fm, "Focus", ui)
	warnNoMatches(ignore == nil || im, "Ignore", ui)
	warnNoMatches(hide == nil || hm, "Hide", ui)
	warnNoMatches(show == nil || hnm, "Show", ui)

	sfm := prof.ShowFrom(showfrom)
	warnNoMatches(showfrom == nil || sfm, "ShowFrom", ui)

	tfm, tim := prof.FilterSamplesByTag(tagfocus, tagignore)
	warnNoMatches(tagfocus == nil || tfm, "TagFocus", ui)
	warnNoMatches(tagignore == nil || tim, "TagIgnore", ui)

	tagshow, err := compileRegexOption("tagshow", v["tagshow"].value, err)
	taghide, err := compileRegexOption("taghide", v["taghide"].value, err)
	tns, tnh := prof.FilterTagsByName(tagshow, taghide)
	warnNoMatches(tagshow == nil || tns, "TagShow", ui)
	warnNoMatches(tagignore == nil || tnh, "TagHide", ui)

	if prunefrom != nil {
		prof.PruneFrom(prunefrom)
	}
	return err
}

func compileRegexOption(name, value string, err error) (*regexp.Regexp, error) {
	if value == "" || err != nil {
		return nil, err
	}
	rx, err := regexp.Compile(value)
	if err != nil {
		return nil, fmt.Errorf("parsing %s regexp: %v", name, err)
	}
	return rx, nil
}

func compileTagFilter(name, value string, numLabelUnits map[string]string, ui plugin.UI, err error) (func(*profile.Sample) bool, error) {
	if value == "" || err != nil {
		return nil, err
	}

	tagValuePair := strings.SplitN(value, "=", 2)
	var wantKey string
	if len(tagValuePair) == 2 {
		wantKey = tagValuePair[0]
		value = tagValuePair[1]
	}

	if numFilter := parseTagFilterRange(value); numFilter != nil {
		ui.PrintErr(name, ":Interpreted '", value, "' as range, not regexp")
		labelFilter := func(vals []int64, unit string) bool {
			for _, val := range vals {
				if numFilter(val, unit) {
					return true
				}
			}
			return false
		}
		numLabelUnit := func(key string) string {
			return numLabelUnits[key]
		}
		if wantKey == "" {
			return func(s *profile.Sample) bool {
				for key, vals := range s.NumLabel {
					if labelFilter(vals, numLabelUnit(key)) {
						return true
					}
				}
				return false
			}, nil
		}
		return func(s *profile.Sample) bool {
			if vals, ok := s.NumLabel[wantKey]; ok {
				return labelFilter(vals, numLabelUnit(wantKey))
			}
			return false
		}, nil
	}

	var rfx []*regexp.Regexp
	for _, tagf := range strings.Split(value, ",") {
		fx, err := regexp.Compile(tagf)
		if err != nil {
			return nil, fmt.Errorf("parsing %s regexp: %v", name, err)
		}
		rfx = append(rfx, fx)
	}
	if wantKey == "" {
		return func(s *profile.Sample) bool {
		matchedrx:
			for _, rx := range rfx {
				for key, vals := range s.Label {
					for _, val := range vals {
						// TODO: Match against val, not key:val in future
						if rx.MatchString(key + ":" + val) {
							continue matchedrx
						}
					}
				}
				return false
			}
			return true
		}, nil
	}
	return func(s *profile.Sample) bool {
		if vals, ok := s.Label[wantKey]; ok {
			for _, rx := range rfx {
				for _, val := range vals {
					if rx.MatchString(val) {
						return true
					}
				}
			}
		}
		return false
	}, nil
}

// parseTagFilterRange returns a function to check if a value is
// contained in the range described by a string. It can recognize
// strings of the form:
// "32kb" -- matches values == 32kb
// ":64kb" -- matches values <= 64kb
// "4mb:" -- matches values >= 4mb
// "12kb:64mb" -- matches values between 12kb and 64mb (both included).
func parseTagFilterRange(filter string) func(int64, string) bool {
	ranges := tagFilterRangeRx.FindAllStringSubmatch(filter, 2)
	if len(ranges) == 0 {
		return nil // No ranges were identified
	}
	v, err := strconv.ParseInt(ranges[0][1], 10, 64)
	if err != nil {
		panic(fmt.Errorf("failed to parse int %s: %v", ranges[0][1], err))
	}
	scaledValue, unit := measurement.Scale(v, ranges[0][2], ranges[0][2])
	if len(ranges) == 1 {
		switch match := ranges[0][0]; filter {
		case match:
			return func(v int64, u string) bool {
				sv, su := measurement.Scale(v, u, unit)
				return su == unit && sv == scaledValue
			}
		case match + ":":
			return func(v int64, u string) bool {
				sv, su := measurement.Scale(v, u, unit)
				return su == unit && sv >= scaledValue
			}
		case ":" + match:
			return func(v int64, u string) bool {
				sv, su := measurement.Scale(v, u, unit)
				return su == unit && sv <= scaledValue
			}
		}
		return nil
	}
	if filter != ranges[0][0]+":"+ranges[1][0] {
		return nil
	}
	if v, err = strconv.ParseInt(ranges[1][1], 10, 64); err != nil {
		panic(fmt.Errorf("failed to parse int %s: %v", ranges[1][1], err))
	}
	scaledValue2, unit2 := measurement.Scale(v, ranges[1][2], unit)
	if unit != unit2 {
		return nil
	}
	return func(v int64, u string) bool {
		sv, su := measurement.Scale(v, u, unit)
		return su == unit && sv >= scaledValue && sv <= scaledValue2
	}
}

func warnNoMatches(match bool, option string, ui plugin.UI) {
	if !match {
		ui.PrintErr(option + " expression matched no samples")
	}
}
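The range grammar handled by parseTagFilterRange is easiest to see by running its regexp by hand. This standalone sketch uses the same pattern as tagFilterRangeRx above; it is illustrative only and does not call the vendored internal API:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as tagFilterRangeRx.
	rx := regexp.MustCompile("([+-]?[[:digit:]]+)([[:alpha:]]+)")
	// "512kb:1mb" yields two (number, unit) pairs; parseTagFilterRange then
	// builds a closure matching 512kb <= value <= 1mb for numeric labels.
	fmt.Println(rx.FindAllStringSubmatch("512kb:1mb", 2))
	// Output: [[512kb 512 kb] [1mb 1 mb]]
}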
1621
vendor/github.com/google/pprof/internal/driver/driver_test.go
generated
vendored
Normal file
File diff suppressed because it is too large
587
vendor/github.com/google/pprof/internal/driver/fetch.go
generated
vendored
Normal file
@@ -0,0 +1,587 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package driver

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"net/url"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/google/pprof/internal/measurement"
	"github.com/google/pprof/internal/plugin"
	"github.com/google/pprof/profile"
)

// fetchProfiles fetches and symbolizes the profiles specified by s.
// It will merge all the profiles it is able to retrieve, even if
// there are some failures. It will return an error if it is unable to
// fetch any profiles.
func fetchProfiles(s *source, o *plugin.Options) (*profile.Profile, error) {
	sources := make([]profileSource, 0, len(s.Sources))
	for _, src := range s.Sources {
		sources = append(sources, profileSource{
			addr:   src,
			source: s,
		})
	}

	bases := make([]profileSource, 0, len(s.Base))
	for _, src := range s.Base {
		bases = append(bases, profileSource{
			addr:   src,
			source: s,
		})
	}

	p, pbase, m, mbase, save, err := grabSourcesAndBases(sources, bases, o.Fetch, o.Obj, o.UI, o.HTTPTransport)
	if err != nil {
		return nil, err
	}

	if pbase != nil {
		if s.DiffBase {
			pbase.SetLabel("pprof::base", []string{"true"})
		}
		if s.Normalize {
			err := p.Normalize(pbase)
			if err != nil {
				return nil, err
			}
		}
		pbase.Scale(-1)
		p, m, err = combineProfiles([]*profile.Profile{p, pbase}, []plugin.MappingSources{m, mbase})
		if err != nil {
			return nil, err
		}
	}

	// Symbolize the merged profile.
	if err := o.Sym.Symbolize(s.Symbolize, m, p); err != nil {
		return nil, err
	}
	p.RemoveUninteresting()
	unsourceMappings(p)

	if s.Comment != "" {
		p.Comments = append(p.Comments, s.Comment)
	}

	// Save a copy of the merged profile if there is at least one remote source.
	if save {
		dir, err := setTmpDir(o.UI)
		if err != nil {
			return nil, err
		}

		prefix := "pprof."
		if len(p.Mapping) > 0 && p.Mapping[0].File != "" {
			prefix += filepath.Base(p.Mapping[0].File) + "."
		}
		for _, s := range p.SampleType {
			prefix += s.Type + "."
		}

		tempFile, err := newTempFile(dir, prefix, ".pb.gz")
		if err == nil {
			if err = p.Write(tempFile); err == nil {
				o.UI.PrintErr("Saved profile in ", tempFile.Name())
			}
		}
		if err != nil {
			o.UI.PrintErr("Could not save profile: ", err)
		}
	}

	if err := p.CheckValid(); err != nil {
		return nil, err
	}

	return p, nil
}

func grabSourcesAndBases(sources, bases []profileSource, fetch plugin.Fetcher, obj plugin.ObjTool, ui plugin.UI, tr http.RoundTripper) (*profile.Profile, *profile.Profile, plugin.MappingSources, plugin.MappingSources, bool, error) {
	wg := sync.WaitGroup{}
	wg.Add(2)
	var psrc, pbase *profile.Profile
	var msrc, mbase plugin.MappingSources
	var savesrc, savebase bool
	var errsrc, errbase error
	var countsrc, countbase int
	go func() {
		defer wg.Done()
		psrc, msrc, savesrc, countsrc, errsrc = chunkedGrab(sources, fetch, obj, ui, tr)
	}()
	go func() {
		defer wg.Done()
		pbase, mbase, savebase, countbase, errbase = chunkedGrab(bases, fetch, obj, ui, tr)
	}()
	wg.Wait()
	save := savesrc || savebase

	if errsrc != nil {
		return nil, nil, nil, nil, false, fmt.Errorf("problem fetching source profiles: %v", errsrc)
	}
	if errbase != nil {
		return nil, nil, nil, nil, false, fmt.Errorf("problem fetching base profiles: %v,", errbase)
	}
	if countsrc == 0 {
		return nil, nil, nil, nil, false, fmt.Errorf("failed to fetch any source profiles")
	}
	if countbase == 0 && len(bases) > 0 {
		return nil, nil, nil, nil, false, fmt.Errorf("failed to fetch any base profiles")
	}
	if want, got := len(sources), countsrc; want != got {
		ui.PrintErr(fmt.Sprintf("Fetched %d source profiles out of %d", got, want))
	}
	if want, got := len(bases), countbase; want != got {
		ui.PrintErr(fmt.Sprintf("Fetched %d base profiles out of %d", got, want))
	}

	return psrc, pbase, msrc, mbase, save, nil
}

// chunkedGrab fetches the profiles described in source and merges them into
// a single profile. It fetches a chunk of profiles concurrently, with a maximum
// chunk size to limit its memory usage.
func chunkedGrab(sources []profileSource, fetch plugin.Fetcher, obj plugin.ObjTool, ui plugin.UI, tr http.RoundTripper) (*profile.Profile, plugin.MappingSources, bool, int, error) {
	const chunkSize = 64

	var p *profile.Profile
	var msrc plugin.MappingSources
	var save bool
	var count int

	for start := 0; start < len(sources); start += chunkSize {
		end := start + chunkSize
		if end > len(sources) {
			end = len(sources)
		}
		chunkP, chunkMsrc, chunkSave, chunkCount, chunkErr := concurrentGrab(sources[start:end], fetch, obj, ui, tr)
		switch {
		case chunkErr != nil:
			return nil, nil, false, 0, chunkErr
		case chunkP == nil:
			continue
		case p == nil:
			p, msrc, save, count = chunkP, chunkMsrc, chunkSave, chunkCount
		default:
			p, msrc, chunkErr = combineProfiles([]*profile.Profile{p, chunkP}, []plugin.MappingSources{msrc, chunkMsrc})
			if chunkErr != nil {
				return nil, nil, false, 0, chunkErr
			}
			if chunkSave {
				save = true
			}
			count += chunkCount
		}
	}

	return p, msrc, save, count, nil
}

// concurrentGrab fetches multiple profiles concurrently
func concurrentGrab(sources []profileSource, fetch plugin.Fetcher, obj plugin.ObjTool, ui plugin.UI, tr http.RoundTripper) (*profile.Profile, plugin.MappingSources, bool, int, error) {
	wg := sync.WaitGroup{}
	wg.Add(len(sources))
	for i := range sources {
		go func(s *profileSource) {
			defer wg.Done()
			s.p, s.msrc, s.remote, s.err = grabProfile(s.source, s.addr, fetch, obj, ui, tr)
		}(&sources[i])
	}
	wg.Wait()

	var save bool
	profiles := make([]*profile.Profile, 0, len(sources))
	msrcs := make([]plugin.MappingSources, 0, len(sources))
	for i := range sources {
		s := &sources[i]
		if err := s.err; err != nil {
			ui.PrintErr(s.addr + ": " + err.Error())
			continue
		}
		save = save || s.remote
		profiles = append(profiles, s.p)
		msrcs = append(msrcs, s.msrc)
		*s = profileSource{}
	}

	if len(profiles) == 0 {
		return nil, nil, false, 0, nil
	}

	p, msrc, err := combineProfiles(profiles, msrcs)
	if err != nil {
		return nil, nil, false, 0, err
	}
	return p, msrc, save, len(profiles), nil
}

func combineProfiles(profiles []*profile.Profile, msrcs []plugin.MappingSources) (*profile.Profile, plugin.MappingSources, error) {
	// Merge profiles.
	if err := measurement.ScaleProfiles(profiles); err != nil {
		return nil, nil, err
	}

	p, err := profile.Merge(profiles)
	if err != nil {
		return nil, nil, err
	}

	// Combine mapping sources.
	msrc := make(plugin.MappingSources)
	for _, ms := range msrcs {
		for m, s := range ms {
			msrc[m] = append(msrc[m], s...)
		}
	}
	return p, msrc, nil
}

type profileSource struct {
	addr   string
	source *source

	p      *profile.Profile
	msrc   plugin.MappingSources
	remote bool
	err    error
}

func homeEnv() string {
	switch runtime.GOOS {
	case "windows":
		return "USERPROFILE"
	case "plan9":
		return "home"
	default:
		return "HOME"
	}
}

// setTmpDir prepares the directory to use to save profiles retrieved
// remotely. It is selected from PPROF_TMPDIR, defaults to $HOME/pprof, and, if
// $HOME is not set, falls back to os.TempDir().
func setTmpDir(ui plugin.UI) (string, error) {
	var dirs []string
	if profileDir := os.Getenv("PPROF_TMPDIR"); profileDir != "" {
		dirs = append(dirs, profileDir)
	}
	if homeDir := os.Getenv(homeEnv()); homeDir != "" {
		dirs = append(dirs, filepath.Join(homeDir, "pprof"))
	}
	dirs = append(dirs, os.TempDir())
	for _, tmpDir := range dirs {
		if err := os.MkdirAll(tmpDir, 0755); err != nil {
			ui.PrintErr("Could not use temp dir ", tmpDir, ": ", err.Error())
			continue
		}
		return tmpDir, nil
	}
	return "", fmt.Errorf("failed to identify temp dir")
}

const testSourceAddress = "pproftest.local"

// grabProfile fetches a profile. Returns the profile, sources for the
// profile mappings, a bool indicating if the profile was fetched
// remotely, and an error.
func grabProfile(s *source, source string, fetcher plugin.Fetcher, obj plugin.ObjTool, ui plugin.UI, tr http.RoundTripper) (p *profile.Profile, msrc plugin.MappingSources, remote bool, err error) {
	var src string
	duration, timeout := time.Duration(s.Seconds)*time.Second, time.Duration(s.Timeout)*time.Second
	if fetcher != nil {
		p, src, err = fetcher.Fetch(source, duration, timeout)
		if err != nil {
			return
		}
	}
	if err != nil || p == nil {
		// Fetch the profile over HTTP or from a file.
		p, src, err = fetch(source, duration, timeout, ui, tr)
		if err != nil {
			return
		}
	}

	if err = p.CheckValid(); err != nil {
		return
	}

	// Update the binary locations from command line and paths.
	locateBinaries(p, s, obj, ui)

	// Collect the source URL for all mappings.
	if src != "" {
		msrc = collectMappingSources(p, src)
		remote = true
		if strings.HasPrefix(src, "http://"+testSourceAddress) {
			// Treat test inputs as local to avoid saving
			// testcase profiles during driver testing.
			remote = false
		}
	}
	return
}

// collectMappingSources saves the mapping sources of a profile.
func collectMappingSources(p *profile.Profile, source string) plugin.MappingSources {
	ms := plugin.MappingSources{}
	for _, m := range p.Mapping {
		src := struct {
			Source string
			Start  uint64
		}{
			source, m.Start,
		}
		key := m.BuildID
		if key == "" {
			key = m.File
		}
		if key == "" {
			// If there is no build id or source file, use the source as the
			// mapping file. This will enable remote symbolization for this
			// mapping, in particular for Go profiles on the legacy format.
			// The source is reset back to empty string by unsourceMapping
			// which is called after symbolization is finished.
			m.File = source
			key = source
		}
		ms[key] = append(ms[key], src)
	}
	return ms
}

// unsourceMappings iterates over the mappings in a profile and replaces file
// set to the remote source URL by collectMappingSources back to empty string.
func unsourceMappings(p *profile.Profile) {
	for _, m := range p.Mapping {
		if m.BuildID == "" {
			if u, err := url.Parse(m.File); err == nil && u.IsAbs() {
				m.File = ""
			}
		}
	}
}

// locateBinaries searches for binary files listed in the profile and, if found,
// updates the profile accordingly.
func locateBinaries(p *profile.Profile, s *source, obj plugin.ObjTool, ui plugin.UI) {
	// Construct search path to examine
	searchPath := os.Getenv("PPROF_BINARY_PATH")
	if searchPath == "" {
		// Use $HOME/pprof/binaries as default directory for local symbolization binaries
		searchPath = filepath.Join(os.Getenv(homeEnv()), "pprof", "binaries")
	}
mapping:
	for _, m := range p.Mapping {
		var baseName string
		if m.File != "" {
			baseName = filepath.Base(m.File)
		}

		for _, path := range filepath.SplitList(searchPath) {
			var fileNames []string
			if m.BuildID != "" {
				fileNames = []string{filepath.Join(path, m.BuildID, baseName)}
				if matches, err := filepath.Glob(filepath.Join(path, m.BuildID, "*")); err == nil {
					fileNames = append(fileNames, matches...)
				}
				fileNames = append(fileNames, filepath.Join(path, m.File, m.BuildID)) // perf path format
			}
			if m.File != "" {
				// Try both the basename and the full path, to support the same directory
				// structure as the perf symfs option.
				if baseName != "" {
					fileNames = append(fileNames, filepath.Join(path, baseName))
				}
				fileNames = append(fileNames, filepath.Join(path, m.File))
			}
			for _, name := range fileNames {
				if f, err := obj.Open(name, m.Start, m.Limit, m.Offset); err == nil {
					defer f.Close()
					fileBuildID := f.BuildID()
					if m.BuildID != "" && m.BuildID != fileBuildID {
						ui.PrintErr("Ignoring local file " + name + ": build-id mismatch (" + m.BuildID + " != " + fileBuildID + ")")
					} else {
						m.File = name
						continue mapping
					}
				}
			}
		}
	}
	if len(p.Mapping) == 0 {
		// If there are no mappings, add a fake mapping to attempt symbolization.
		// This is useful for some profiles generated by the golang runtime, which
		// do not include any mappings. Symbolization with a fake mapping will only
		// be successful against a non-PIE binary.
		m := &profile.Mapping{ID: 1}
		p.Mapping = []*profile.Mapping{m}
		for _, l := range p.Location {
			l.Mapping = m
		}
	}
	// Replace executable filename/buildID with the overrides from source.
	// Assumes the executable is the first Mapping entry.
	if execName, buildID := s.ExecName, s.BuildID; execName != "" || buildID != "" {
		m := p.Mapping[0]
		if execName != "" {
			m.File = execName
		}
		if buildID != "" {
			m.BuildID = buildID
		}
	}
}

// fetch fetches a profile from source, within the timeout specified,
// producing messages through the ui. It returns the profile and the
// url of the actual source of the profile for remote profiles.
func fetch(source string, duration, timeout time.Duration, ui plugin.UI, tr http.RoundTripper) (p *profile.Profile, src string, err error) {
	var f io.ReadCloser

	if sourceURL, timeout := adjustURL(source, duration, timeout); sourceURL != "" {
		ui.Print("Fetching profile over HTTP from " + sourceURL)
		if duration > 0 {
			ui.Print(fmt.Sprintf("Please wait... (%v)", duration))
		}
		f, err = fetchURL(sourceURL, timeout, tr)
		src = sourceURL
	} else if isPerfFile(source) {
		f, err = convertPerfData(source, ui)
	} else {
		f, err = os.Open(source)
	}
	if err == nil {
		defer f.Close()
		p, err = profile.Parse(f)
	}
	return
}

// fetchURL fetches a profile from a URL using HTTP.
func fetchURL(source string, timeout time.Duration, tr http.RoundTripper) (io.ReadCloser, error) {
	client := &http.Client{
		Transport: tr,
		Timeout:   timeout + 5*time.Second,
	}
	resp, err := client.Get(source)
	if err != nil {
		return nil, fmt.Errorf("http fetch: %v", err)
	}
	if resp.StatusCode != http.StatusOK {
		defer resp.Body.Close()
		return nil, statusCodeError(resp)
	}

	return resp.Body, nil
}

func statusCodeError(resp *http.Response) error {
	if resp.Header.Get("X-Go-Pprof") != "" && strings.Contains(resp.Header.Get("Content-Type"), "text/plain") {
		// error is from pprof endpoint
		if body, err := ioutil.ReadAll(resp.Body); err == nil {
			return fmt.Errorf("server response: %s - %s", resp.Status, body)
		}
	}
	return fmt.Errorf("server response: %s", resp.Status)
}

// isPerfFile checks if a file is in perf.data format. It also returns false
// if it encounters an error during the check.
func isPerfFile(path string) bool {
	sourceFile, openErr := os.Open(path)
	if openErr != nil {
		return false
	}
	defer sourceFile.Close()

	// If the file is the output of a perf record command, it should begin
	// with the string PERFILE2.
	perfHeader := []byte("PERFILE2")
	actualHeader := make([]byte, len(perfHeader))
	if _, readErr := sourceFile.Read(actualHeader); readErr != nil {
		return false
	}
	return bytes.Equal(actualHeader, perfHeader)
}

// convertPerfData converts the file at path which should be in perf.data format
// using the perf_to_profile tool and returns the file containing the
// profile.proto formatted data.
func convertPerfData(perfPath string, ui plugin.UI) (*os.File, error) {
	ui.Print(fmt.Sprintf(
		"Converting %s to a profile.proto... (May take a few minutes)",
		perfPath))
	profile, err := newTempFile(os.TempDir(), "pprof_", ".pb.gz")
	if err != nil {
		return nil, err
	}
	deferDeleteTempFile(profile.Name())
	cmd := exec.Command("perf_to_profile", "-i", perfPath, "-o", profile.Name(), "-f")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		profile.Close()
		return nil, fmt.Errorf("failed to convert perf.data file. Try github.com/google/perf_data_converter: %v", err)
	}
	return profile, nil
}

// adjustURL validates if a profile source is a URL and returns a
// cleaned up URL and the timeout to use for retrieval over HTTP.
// If the source cannot be recognized as a URL it returns an empty string.
func adjustURL(source string, duration, timeout time.Duration) (string, time.Duration) {
	u, err := url.Parse(source)
	if err != nil || (u.Host == "" && u.Scheme != "" && u.Scheme != "file") {
		// Try adding http:// to catch sources of the form hostname:port/path.
		// url.Parse treats "hostname" as the scheme.
		u, err = url.Parse("http://" + source)
	}
	if err != nil || u.Host == "" {
		return "", 0
	}

	// Apply duration/timeout overrides to URL.
	values := u.Query()
	if duration > 0 {
		values.Set("seconds", fmt.Sprint(int(duration.Seconds())))
	} else {
		if urlSeconds := values.Get("seconds"); urlSeconds != "" {
			if us, err := strconv.ParseInt(urlSeconds, 10, 32); err == nil {
				duration = time.Duration(us) * time.Second
			}
		}
	}
	if timeout <= 0 {
		if duration > 0 {
			timeout = duration + duration/2
		} else {
			timeout = 60 * time.Second
		}
	}
	u.RawQuery = values.Encode()
	return u.String(), timeout
}
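The "http://" retry in adjustURL exists because of how net/url parses a bare host:port source. A standalone sketch of that behavior, illustrative only and independent of the vendored internal package:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// A bare host:port/path parses with the host name taken as the scheme
	// and an empty Host, which is why adjustURL retries with "http://".
	u, _ := url.Parse("localhost:8080/debug/pprof/profile")
	fmt.Println(u.Scheme, u.Host) // "localhost" and an empty host

	u, _ = url.Parse("http://" + "localhost:8080/debug/pprof/profile")
	fmt.Println(u.Scheme, u.Host) // "http" and "localhost:8080"
}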
758
vendor/github.com/google/pprof/internal/driver/fetch_test.go
generated
vendored
Normal file
@@ -0,0 +1,758 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"crypto/ecdsa"
|
||||
"crypto/elliptic"
|
||||
"crypto/rand"
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"encoding/pem"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"math/big"
|
||||
"net"
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/google/pprof/internal/binutils"
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/internal/proftest"
|
||||
"github.com/google/pprof/internal/symbolizer"
|
||||
"github.com/google/pprof/internal/transport"
|
||||
"github.com/google/pprof/profile"
|
||||
)
|
||||
|
||||
func TestSymbolizationPath(t *testing.T) {
|
||||
if runtime.GOOS == "windows" {
|
||||
t.Skip("test assumes Unix paths")
|
||||
}
|
||||
|
||||
// Save environment variables to restore after test
|
||||
saveHome := os.Getenv(homeEnv())
|
||||
savePath := os.Getenv("PPROF_BINARY_PATH")
|
||||
|
||||
tempdir, err := ioutil.TempDir("", "home")
|
||||
if err != nil {
|
||||
t.Fatal("creating temp dir: ", err)
|
||||
}
|
||||
defer os.RemoveAll(tempdir)
|
||||
os.MkdirAll(filepath.Join(tempdir, "pprof", "binaries", "abcde10001"), 0700)
|
||||
os.Create(filepath.Join(tempdir, "pprof", "binaries", "abcde10001", "binary"))
|
||||
|
||||
obj := testObj{tempdir}
|
||||
os.Setenv(homeEnv(), tempdir)
|
||||
for _, tc := range []struct {
|
||||
env, file, buildID, want string
|
||||
msgCount int
|
||||
}{
|
||||
{"", "/usr/bin/binary", "", "/usr/bin/binary", 0},
|
||||
{"", "/usr/bin/binary", "fedcb10000", "/usr/bin/binary", 0},
|
||||
{"/usr", "/bin/binary", "", "/usr/bin/binary", 0},
|
||||
{"", "/prod/path/binary", "abcde10001", filepath.Join(tempdir, "pprof/binaries/abcde10001/binary"), 0},
|
||||
{"/alternate/architecture", "/usr/bin/binary", "", "/alternate/architecture/binary", 0},
|
||||
{"/alternate/architecture", "/usr/bin/binary", "abcde10001", "/alternate/architecture/binary", 0},
|
||||
{"/nowhere:/alternate/architecture", "/usr/bin/binary", "fedcb10000", "/usr/bin/binary", 1},
|
||||
{"/nowhere:/alternate/architecture", "/usr/bin/binary", "abcde10002", "/usr/bin/binary", 1},
|
||||
} {
|
||||
os.Setenv("PPROF_BINARY_PATH", tc.env)
|
||||
p := &profile.Profile{
|
||||
Mapping: []*profile.Mapping{
|
||||
{
|
||||
File: tc.file,
|
||||
BuildID: tc.buildID,
|
||||
},
|
||||
},
|
||||
}
|
||||
s := &source{}
|
||||
locateBinaries(p, s, obj, &proftest.TestUI{T: t, Ignore: tc.msgCount})
|
||||
if file := p.Mapping[0].File; file != tc.want {
|
||||
t.Errorf("%s:%s:%s, want %s, got %s", tc.env, tc.file, tc.buildID, tc.want, file)
|
||||
}
|
||||
}
|
||||
os.Setenv(homeEnv(), saveHome)
|
||||
os.Setenv("PPROF_BINARY_PATH", savePath)
|
||||
}
|
||||
|
||||
func TestCollectMappingSources(t *testing.T) {
|
||||
const startAddress uint64 = 0x40000
|
||||
const url = "http://example.com"
|
||||
for _, tc := range []struct {
|
||||
file, buildID string
|
||||
want plugin.MappingSources
|
||||
}{
|
||||
{"/usr/bin/binary", "buildId", mappingSources("buildId", url, startAddress)},
|
||||
{"/usr/bin/binary", "", mappingSources("/usr/bin/binary", url, startAddress)},
|
||||
{"", "", mappingSources(url, url, startAddress)},
|
||||
} {
|
||||
p := &profile.Profile{
|
||||
Mapping: []*profile.Mapping{
|
||||
{
|
||||
File: tc.file,
|
||||
BuildID: tc.buildID,
|
||||
Start: startAddress,
|
||||
},
|
||||
},
|
||||
}
|
||||
got := collectMappingSources(p, url)
|
||||
if !reflect.DeepEqual(got, tc.want) {
|
||||
t.Errorf("%s:%s, want %v, got %v", tc.file, tc.buildID, tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestUnsourceMappings(t *testing.T) {
|
||||
for _, tc := range []struct {
|
||||
file, buildID, want string
|
||||
}{
|
||||
{"/usr/bin/binary", "buildId", "/usr/bin/binary"},
|
||||
{"http://example.com", "", ""},
|
||||
} {
|
||||
p := &profile.Profile{
|
||||
Mapping: []*profile.Mapping{
|
||||
{
|
||||
File: tc.file,
|
||||
BuildID: tc.buildID,
|
||||
},
|
||||
},
|
||||
}
|
||||
unsourceMappings(p)
|
||||
if got := p.Mapping[0].File; got != tc.want {
|
||||
t.Errorf("%s:%s, want %s, got %s", tc.file, tc.buildID, tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
type testObj struct {
|
||||
home string
|
||||
}
|
||||
|
||||
func (o testObj) Open(file string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
switch file {
|
||||
case "/alternate/architecture/binary":
|
||||
return testFile{file, "abcde10001"}, nil
|
||||
case "/usr/bin/binary":
|
||||
return testFile{file, "fedcb10000"}, nil
|
||||
case filepath.Join(o.home, "pprof/binaries/abcde10001/binary"):
|
||||
return testFile{file, "abcde10001"}, nil
|
||||
}
|
||||
return nil, fmt.Errorf("not found: %s", file)
|
||||
}
|
||||
func (testObj) Demangler(_ string) func(names []string) (map[string]string, error) {
|
||||
return func(names []string) (map[string]string, error) { return nil, nil }
|
||||
}
|
||||
func (testObj) Disasm(file string, start, end uint64) ([]plugin.Inst, error) { return nil, nil }
|
||||
|
||||
type testFile struct{ name, buildID string }
|
||||
|
||||
func (f testFile) Name() string { return f.name }
|
||||
func (testFile) Base() uint64 { return 0 }
|
||||
func (f testFile) BuildID() string { return f.buildID }
|
||||
func (testFile) SourceLine(addr uint64) ([]plugin.Frame, error) { return nil, nil }
|
||||
func (testFile) Symbols(r *regexp.Regexp, addr uint64) ([]*plugin.Sym, error) { return nil, nil }
|
||||
func (testFile) Close() error { return nil }
|
||||
|
||||
func TestFetch(t *testing.T) {
|
||||
const path = "testdata/"
|
||||
type testcase struct {
|
||||
source, execName string
|
||||
}
|
||||
|
||||
for _, tc := range []testcase{
|
||||
{path + "go.crc32.cpu", ""},
|
||||
{path + "go.nomappings.crash", "/bin/gotest.exe"},
|
||||
{"http://localhost/profile?file=cppbench.cpu", ""},
|
||||
} {
|
||||
p, _, _, err := grabProfile(&source{ExecName: tc.execName}, tc.source, nil, testObj{}, &proftest.TestUI{T: t}, &httpTransport{})
|
||||
if err != nil {
|
||||
t.Fatalf("%s: %s", tc.source, err)
|
||||
}
|
||||
if len(p.Sample) == 0 {
|
||||
t.Errorf("%s: want non-zero samples", tc.source)
|
||||
}
|
||||
if e := tc.execName; e != "" {
|
||||
switch {
|
||||
case len(p.Mapping) == 0 || p.Mapping[0] == nil:
|
||||
t.Errorf("%s: want mapping[0].execName == %s, got no mappings", tc.source, e)
|
||||
case p.Mapping[0].File != e:
|
||||
t.Errorf("%s: want mapping[0].execName == %s, got %s", tc.source, e, p.Mapping[0].File)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchWithBase(t *testing.T) {
|
||||
baseVars := pprofVariables
|
||||
defer func() { pprofVariables = baseVars }()
|
||||
|
||||
type WantSample struct {
|
||||
values []int64
|
||||
labels map[string][]string
|
||||
}
|
||||
|
||||
const path = "testdata/"
|
||||
type testcase struct {
|
||||
desc string
|
||||
sources []string
|
||||
bases []string
|
||||
diffBases []string
|
||||
normalize bool
|
||||
wantSamples []WantSample
|
||||
wantErrorMsg string
|
||||
}
|
||||
|
||||
testcases := []testcase{
|
||||
{
|
||||
"not normalized base is same as source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention"},
|
||||
nil,
|
||||
false,
|
||||
nil,
|
||||
"",
|
||||
},
|
||||
{
|
||||
"not normalized base is same as source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention"},
|
||||
nil,
|
||||
false,
|
||||
nil,
|
||||
"",
|
||||
},
|
||||
{
|
||||
"not normalized single source, multiple base (all profiles same)",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention", path + "cppbench.contention"},
|
||||
nil,
|
||||
false,
|
||||
[]WantSample{
|
||||
{
|
||||
values: []int64{-2700, -608881724},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{-100, -23992},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{-200, -179943},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{-100, -17778444},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{-100, -75976},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{-300, -63568134},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
},
|
||||
"",
|
||||
},
|
||||
{
|
||||
"not normalized, different base and source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.small.contention"},
|
||||
nil,
|
||||
false,
|
||||
[]WantSample{
|
||||
{
|
||||
values: []int64{1700, 608878600},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{100, 23992},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{200, 179943},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{100, 17778444},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{100, 75976},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{300, 63568134},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
},
|
||||
"",
|
||||
},
|
||||
{
|
||||
"normalized base is same as source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention"},
|
||||
nil,
|
||||
true,
|
||||
nil,
|
||||
"",
|
||||
},
|
||||
{
|
||||
"normalized single source, multiple base (all profiles same)",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention", path + "cppbench.contention"},
|
||||
nil,
|
||||
true,
|
||||
nil,
|
||||
"",
|
||||
},
|
||||
{
|
||||
"normalized different base and source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.small.contention"},
|
||||
nil,
|
||||
true,
|
||||
[]WantSample{
|
||||
{
|
||||
values: []int64{-229, -370},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{28, 0},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{57, 0},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{28, 80},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{28, 0},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{85, 287},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
},
|
||||
"",
|
||||
},
|
||||
{
|
||||
"not normalized diff base is same as source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
nil,
|
||||
[]string{path + "cppbench.contention"},
|
||||
false,
|
||||
[]WantSample{
|
||||
{
|
||||
values: []int64{2700, 608881724},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{100, 23992},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{200, 179943},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{100, 17778444},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{100, 75976},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{300, 63568134},
|
||||
labels: map[string][]string{},
|
||||
},
|
||||
{
|
||||
values: []int64{-2700, -608881724},
|
||||
labels: map[string][]string{"pprof::base": {"true"}},
|
||||
},
|
||||
{
|
||||
values: []int64{-100, -23992},
|
||||
labels: map[string][]string{"pprof::base": {"true"}},
|
||||
},
|
||||
{
|
||||
values: []int64{-200, -179943},
|
||||
labels: map[string][]string{"pprof::base": {"true"}},
|
||||
},
|
||||
{
|
||||
values: []int64{-100, -17778444},
|
||||
labels: map[string][]string{"pprof::base": {"true"}},
|
||||
},
|
||||
{
|
||||
values: []int64{-100, -75976},
|
||||
labels: map[string][]string{"pprof::base": {"true"}},
|
||||
},
|
||||
{
|
||||
values: []int64{-300, -63568134},
|
||||
labels: map[string][]string{"pprof::base": {"true"}},
|
||||
},
|
||||
},
|
||||
"",
|
||||
},
|
||||
{
|
||||
"diff_base and base both specified",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention"},
|
||||
false,
|
||||
nil,
|
||||
"-base and -diff_base flags cannot both be specified",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testcases {
|
||||
t.Run(tc.desc, func(t *testing.T) {
|
||||
pprofVariables = baseVars.makeCopy()
|
||||
f := testFlags{
|
||||
stringLists: map[string][]string{
|
||||
"base": tc.bases,
|
||||
"diff_base": tc.diffBases,
|
||||
},
|
||||
bools: map[string]bool{
|
||||
"normalize": tc.normalize,
|
||||
},
|
||||
}
|
||||
f.args = tc.sources
|
||||
|
||||
o := setDefaults(&plugin.Options{
|
||||
UI: &proftest.TestUI{T: t, AllowRx: "Local symbolization failed|Some binary filenames not available"},
|
||||
Flagset: f,
|
||||
HTTPTransport: transport.New(nil),
|
||||
})
|
||||
src, _, err := parseFlags(o)
|
||||
|
||||
if tc.wantErrorMsg != "" {
|
||||
if err == nil {
|
||||
t.Fatalf("got nil, want error %q", tc.wantErrorMsg)
|
||||
}
|
||||
|
||||
if gotErrMsg := err.Error(); gotErrMsg != tc.wantErrorMsg {
|
||||
t.Fatalf("got error %q, want error %q", gotErrMsg, tc.wantErrorMsg)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("got error %q, want no error", err)
|
||||
}
|
||||
|
||||
p, err := fetchProfiles(src, o)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("got error %q, want no error", err)
|
||||
}
|
||||
|
||||
if got, want := len(p.Sample), len(tc.wantSamples); got != want {
|
||||
t.Fatalf("got %d samples want %d", got, want)
|
||||
}
|
||||
|
||||
for i, sample := range p.Sample {
|
||||
if !reflect.DeepEqual(tc.wantSamples[i].values, sample.Value) {
|
||||
t.Errorf("for sample %d got values %v, want %v", i, sample.Value, tc.wantSamples[i])
|
||||
}
|
||||
if !reflect.DeepEqual(tc.wantSamples[i].labels, sample.Label) {
|
||||
t.Errorf("for sample %d got labels %v, want %v", i, sample.Label, tc.wantSamples[i].labels)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// mappingSources creates MappingSources map with a single item.
|
||||
func mappingSources(key, source string, start uint64) plugin.MappingSources {
|
||||
return plugin.MappingSources{
|
||||
key: []struct {
|
||||
Source string
|
||||
Start uint64
|
||||
}{
|
||||
{Source: source, Start: start},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
type httpTransport struct{}
|
||||
|
||||
func (tr *httpTransport) RoundTrip(req *http.Request) (*http.Response, error) {
|
||||
values := req.URL.Query()
|
||||
file := values.Get("file")
|
||||
|
||||
if file == "" {
|
||||
return nil, fmt.Errorf("want .../file?profile, got %s", req.URL.String())
|
||||
}
|
||||
|
||||
t := &http.Transport{}
|
||||
t.RegisterProtocol("file", http.NewFileTransport(http.Dir("testdata/")))
|
||||
|
||||
c := &http.Client{Transport: t}
|
||||
return c.Get("file:///" + file)
|
||||
}
|
||||
|
||||
func closedError() string {
|
||||
if runtime.GOOS == "plan9" {
|
||||
return "listen hungup"
|
||||
}
|
||||
return "use of closed"
|
||||
}
|
||||
|
||||
func TestHTTPSInsecure(t *testing.T) {
|
||||
if runtime.GOOS == "nacl" || runtime.GOOS == "js" {
|
||||
t.Skip("test assumes tcp available")
|
||||
}
|
||||
saveHome := os.Getenv(homeEnv())
|
||||
tempdir, err := ioutil.TempDir("", "home")
|
||||
if err != nil {
|
||||
t.Fatal("creating temp dir: ", err)
|
||||
}
|
||||
defer os.RemoveAll(tempdir)
|
||||
|
||||
// pprof writes to $HOME/pprof by default which is not necessarily
|
||||
// writeable (e.g. on a Debian buildd) so set $HOME to something we
|
||||
// know we can write to for the duration of the test.
|
||||
os.Setenv(homeEnv(), tempdir)
|
||||
defer os.Setenv(homeEnv(), saveHome)
|
||||
|
||||
baseVars := pprofVariables
|
||||
pprofVariables = baseVars.makeCopy()
|
||||
defer func() { pprofVariables = baseVars }()
|
||||
|
||||
tlsCert, _, _ := selfSignedCert(t, "")
|
||||
tlsConfig := &tls.Config{Certificates: []tls.Certificate{tlsCert}}
|
||||
|
||||
l, err := tls.Listen("tcp", "localhost:0", tlsConfig)
|
||||
if err != nil {
|
||||
t.Fatalf("net.Listen: got error %v, want no error", err)
|
||||
}
|
||||
|
||||
donec := make(chan error, 1)
|
||||
go func(donec chan<- error) {
|
||||
donec <- http.Serve(l, nil)
|
||||
}(donec)
|
||||
defer func() {
|
||||
if got, want := <-donec, closedError(); !strings.Contains(got.Error(), want) {
|
||||
t.Fatalf("Serve got error %v, want %q", got, want)
|
||||
}
|
||||
}()
|
||||
defer l.Close()
|
||||
|
||||
outputTempFile, err := ioutil.TempFile("", "profile_output")
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create tempfile: %v", err)
|
||||
}
|
||||
defer os.Remove(outputTempFile.Name())
|
||||
defer outputTempFile.Close()
|
||||
|
||||
address := "https+insecure://" + l.Addr().String() + "/debug/pprof/goroutine"
|
||||
s := &source{
|
||||
Sources: []string{address},
|
||||
Seconds: 10,
|
||||
Timeout: 10,
|
||||
Symbolize: "remote",
|
||||
}
|
||||
o := &plugin.Options{
|
||||
Obj: &binutils.Binutils{},
|
||||
UI: &proftest.TestUI{T: t, AllowRx: "Saved profile in"},
|
||||
HTTPTransport: transport.New(nil),
|
||||
}
|
||||
o.Sym = &symbolizer.Symbolizer{Obj: o.Obj, UI: o.UI}
|
||||
p, err := fetchProfiles(s, o)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(p.SampleType) == 0 {
|
||||
t.Fatalf("fetchProfiles(%s) got empty profile: len(p.SampleType)==0", address)
|
||||
}
|
||||
if len(p.Function) == 0 {
|
||||
t.Fatalf("fetchProfiles(%s) got non-symbolized profile: len(p.Function)==0", address)
|
||||
}
|
||||
if err := checkProfileHasFunction(p, "TestHTTPSInsecure"); err != nil {
|
||||
t.Fatalf("fetchProfiles(%s) %v", address, err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestHTTPSWithServerCertFetch(t *testing.T) {
|
||||
if runtime.GOOS == "nacl" || runtime.GOOS == "js" {
|
||||
t.Skip("test assumes tcp available")
|
||||
}
|
||||
saveHome := os.Getenv(homeEnv())
|
||||
tempdir, err := ioutil.TempDir("", "home")
|
||||
if err != nil {
|
||||
t.Fatal("creating temp dir: ", err)
|
||||
}
|
||||
defer os.RemoveAll(tempdir)
|
||||
|
||||
// pprof writes to $HOME/pprof by default which is not necessarily
|
||||
// writeable (e.g. on a Debian buildd) so set $HOME to something we
|
||||
// know we can write to for the duration of the test.
|
||||
os.Setenv(homeEnv(), tempdir)
|
||||
defer os.Setenv(homeEnv(), saveHome)
|
||||
|
||||
baseVars := pprofVariables
|
||||
pprofVariables = baseVars.makeCopy()
|
||||
defer func() { pprofVariables = baseVars }()
|
||||
|
||||
cert, certBytes, keyBytes := selfSignedCert(t, "localhost")
|
||||
cas := x509.NewCertPool()
|
||||
cas.AppendCertsFromPEM(certBytes)
|
||||
|
||||
tlsConfig := &tls.Config{
|
||||
RootCAs: cas,
|
||||
Certificates: []tls.Certificate{cert},
|
||||
ClientAuth: tls.RequireAndVerifyClientCert,
|
||||
ClientCAs: cas,
|
||||
}
|
||||
|
||||
l, err := tls.Listen("tcp", "localhost:0", tlsConfig)
|
||||
if err != nil {
|
||||
t.Fatalf("net.Listen: got error %v, want no error", err)
|
||||
}
|
||||
|
||||
donec := make(chan error, 1)
|
||||
go func(donec chan<- error) {
|
||||
donec <- http.Serve(l, nil)
|
||||
}(donec)
|
||||
defer func() {
|
||||
if got, want := <-donec, closedError(); !strings.Contains(got.Error(), want) {
|
||||
t.Fatalf("Serve got error %v, want %q", got, want)
|
||||
}
|
||||
}()
|
||||
defer l.Close()
|
||||
|
||||
outputTempFile, err := ioutil.TempFile("", "profile_output")
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create tempfile: %v", err)
|
||||
}
|
||||
defer os.Remove(outputTempFile.Name())
|
||||
defer outputTempFile.Close()
|
||||
|
||||
// Get port from the address, so request to the server can be made using
|
||||
// the host name specified in certificates.
|
||||
_, portStr, err := net.SplitHostPort(l.Addr().String())
|
||||
if err != nil {
|
||||
t.Fatalf("cannot get port from URL: %v", err)
|
||||
}
|
||||
address := "https://" + "localhost:" + portStr + "/debug/pprof/goroutine"
|
||||
s := &source{
|
||||
Sources: []string{address},
|
||||
Seconds: 10,
|
||||
Timeout: 10,
|
||||
Symbolize: "remote",
|
||||
}
|
||||
|
||||
certTempFile, err := ioutil.TempFile("", "cert_output")
|
||||
if err != nil {
|
||||
t.Errorf("cannot create cert tempfile: %v", err)
|
||||
}
|
||||
defer os.Remove(certTempFile.Name())
|
||||
defer certTempFile.Close()
|
||||
certTempFile.Write(certBytes)
|
||||
|
||||
keyTempFile, err := ioutil.TempFile("", "key_output")
|
||||
if err != nil {
|
||||
t.Errorf("cannot create key tempfile: %v", err)
|
||||
}
|
||||
defer os.Remove(keyTempFile.Name())
|
||||
defer keyTempFile.Close()
|
||||
keyTempFile.Write(keyBytes)
|
||||
|
||||
f := &testFlags{
|
||||
strings: map[string]string{
|
||||
"tls_cert": certTempFile.Name(),
|
||||
"tls_key": keyTempFile.Name(),
|
||||
"tls_ca": certTempFile.Name(),
|
||||
},
|
||||
}
|
||||
o := &plugin.Options{
|
||||
Obj: &binutils.Binutils{},
|
||||
UI: &proftest.TestUI{T: t, AllowRx: "Saved profile in"},
|
||||
Flagset: f,
|
||||
HTTPTransport: transport.New(f),
|
||||
}
|
||||
|
||||
o.Sym = &symbolizer.Symbolizer{Obj: o.Obj, UI: o.UI, Transport: o.HTTPTransport}
|
||||
p, err := fetchProfiles(s, o)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(p.SampleType) == 0 {
|
||||
t.Fatalf("fetchProfiles(%s) got empty profile: len(p.SampleType)==0", address)
|
||||
}
|
||||
if len(p.Function) == 0 {
|
||||
t.Fatalf("fetchProfiles(%s) got non-symbolized profile: len(p.Function)==0", address)
|
||||
}
|
||||
if err := checkProfileHasFunction(p, "TestHTTPSWithServerCertFetch"); err != nil {
|
||||
t.Fatalf("fetchProfiles(%s) %v", address, err)
|
||||
}
|
||||
}
|
||||
|
||||
func checkProfileHasFunction(p *profile.Profile, fname string) error {
|
||||
for _, f := range p.Function {
|
||||
if strings.Contains(f.Name, fname) {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return fmt.Errorf("got %s, want function %q", p.String(), fname)
|
||||
}
|
||||
|
||||
// selfSignedCert generates a self-signed certificate for host and returns it
// as a tls.Certificate together with the PEM-encoded certificate and private
// key bytes.
|
||||
func selfSignedCert(t *testing.T, host string) (tls.Certificate, []byte, []byte) {
|
||||
privKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to generate private key: %v", err)
|
||||
}
|
||||
b, err := x509.MarshalECPrivateKey(privKey)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to marshal private key: %v", err)
|
||||
}
|
||||
bk := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: b})
|
||||
|
||||
tmpl := x509.Certificate{
|
||||
SerialNumber: big.NewInt(1),
|
||||
NotBefore: time.Now(),
|
||||
NotAfter: time.Now().Add(10 * time.Minute),
|
||||
IsCA: true,
|
||||
DNSNames: []string{host},
|
||||
}
|
||||
|
||||
b, err = x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, privKey.Public(), privKey)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create cert: %v", err)
|
||||
}
|
||||
bc := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: b})
|
||||
|
||||
cert, err := tls.X509KeyPair(bc, bk)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create TLS key pair: %v", err)
|
||||
}
|
||||
return cert, bc, bk
|
||||
}
|
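On the client side, a test-only certificate like the one produced above is trusted by adding its PEM bytes to a certificate pool. A minimal sketch under the assumption that the PEM bytes are available on disk; the file path and server address are illustrative only:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// certPEM stands in for the PEM bytes a helper such as selfSignedCert
	// above returns.
	certPEM, err := ioutil.ReadFile("testcert.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(certPEM) {
		log.Fatal("failed to add test certificate to pool")
	}
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://localhost:8443/debug/pprof/goroutine")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}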
92
vendor/github.com/google/pprof/internal/driver/flags.go
generated
vendored
Normal file
92
vendor/github.com/google/pprof/internal/driver/flags.go
generated
vendored
Normal file
@@ -0,0 +1,92 @@
|
||||
// Copyright 2018 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// GoFlags implements the plugin.FlagSet interface.
|
||||
type GoFlags struct {
|
||||
UsageMsgs []string
|
||||
}
|
||||
|
||||
// Bool implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) Bool(o string, d bool, c string) *bool {
|
||||
return flag.Bool(o, d, c)
|
||||
}
|
||||
|
||||
// Int implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) Int(o string, d int, c string) *int {
|
||||
return flag.Int(o, d, c)
|
||||
}
|
||||
|
||||
// Float64 implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) Float64(o string, d float64, c string) *float64 {
|
||||
return flag.Float64(o, d, c)
|
||||
}
|
||||
|
||||
// String implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) String(o, d, c string) *string {
|
||||
return flag.String(o, d, c)
|
||||
}
|
||||
|
||||
// BoolVar implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) BoolVar(b *bool, o string, d bool, c string) {
|
||||
flag.BoolVar(b, o, d, c)
|
||||
}
|
||||
|
||||
// IntVar implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) IntVar(i *int, o string, d int, c string) {
|
||||
flag.IntVar(i, o, d, c)
|
||||
}
|
||||
|
||||
// Float64Var implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) Float64Var(f *float64, o string, d float64, c string) {
|
||||
flag.Float64Var(f, o, d, c)
|
||||
}
|
||||
|
||||
// StringVar implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) StringVar(s *string, o, d, c string) {
|
||||
flag.StringVar(s, o, d, c)
|
||||
}
|
||||
|
||||
// StringList implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) StringList(o, d, c string) *[]*string {
|
||||
return &[]*string{flag.String(o, d, c)}
|
||||
}
|
||||
|
||||
// ExtraUsage implements the plugin.FlagSet interface.
|
||||
func (f *GoFlags) ExtraUsage() string {
|
||||
return strings.Join(f.UsageMsgs, "\n")
|
||||
}
|
||||
|
||||
// AddExtraUsage implements the plugin.FlagSet interface.
|
||||
func (f *GoFlags) AddExtraUsage(eu string) {
|
||||
f.UsageMsgs = append(f.UsageMsgs, eu)
|
||||
}
|
||||
|
||||
// Parse implements the plugin.FlagSet interface.
|
||||
func (*GoFlags) Parse(usage func()) []string {
|
||||
flag.Usage = usage
|
||||
flag.Parse()
|
||||
args := flag.Args()
|
||||
if len(args) == 0 {
|
||||
usage()
|
||||
}
|
||||
return args
|
||||
}
|
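A sketch of how a driver might define and parse options through this wrapper. It assumes it lives in the same package as GoFlags and that "fmt" and "os" are imported; the flag names are illustrative, not pprof's real options.

func exampleGoFlags() {
	f := &GoFlags{}
	seconds := f.Int("seconds", 30, "duration of profile collection")
	output := f.String("output", "", "output file name")
	f.AddExtraUsage("  example [flags] <profile source>")

	// Parse installs the usage function and returns the remaining
	// positional arguments (the profile sources).
	args := f.Parse(func() {
		fmt.Fprintln(os.Stderr, "usage: example [flags] <profile source>")
		fmt.Fprintln(os.Stderr, f.ExtraUsage())
	})
	fmt.Println("seconds:", *seconds, "output:", *output, "sources:", args)
}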
103
vendor/github.com/google/pprof/internal/driver/flamegraph.go
generated
vendored
Normal file
103
vendor/github.com/google/pprof/internal/driver/flamegraph.go
generated
vendored
Normal file
@@ -0,0 +1,103 @@
|
||||
// Copyright 2017 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"html/template"
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
"github.com/google/pprof/internal/graph"
|
||||
"github.com/google/pprof/internal/measurement"
|
||||
"github.com/google/pprof/internal/report"
|
||||
)
|
||||
|
||||
type treeNode struct {
|
||||
Name string `json:"n"`
|
||||
FullName string `json:"f"`
|
||||
Cum int64 `json:"v"`
|
||||
CumFormat string `json:"l"`
|
||||
Percent string `json:"p"`
|
||||
Children []*treeNode `json:"c"`
|
||||
}
|
||||
|
||||
// flamegraph generates a web page containing a flamegraph.
|
||||
func (ui *webInterface) flamegraph(w http.ResponseWriter, req *http.Request) {
|
||||
// Force the call tree so that the graph is a tree.
|
||||
// Also do not trim the tree so that the flame graph contains all functions.
|
||||
rpt, errList := ui.makeReport(w, req, []string{"svg"}, "call_tree", "true", "trim", "false")
|
||||
if rpt == nil {
|
||||
return // error already reported
|
||||
}
|
||||
|
||||
// Generate dot graph.
|
||||
g, config := report.GetDOT(rpt)
|
||||
var nodes []*treeNode
|
||||
nroots := 0
|
||||
rootValue := int64(0)
|
||||
nodeArr := []string{}
|
||||
nodeMap := map[*graph.Node]*treeNode{}
|
||||
// Make all nodes and the map, collect the roots.
|
||||
for _, n := range g.Nodes {
|
||||
v := n.CumValue()
|
||||
fullName := n.Info.PrintableName()
|
||||
node := &treeNode{
|
||||
Name: graph.ShortenFunctionName(fullName),
|
||||
FullName: fullName,
|
||||
Cum: v,
|
||||
CumFormat: config.FormatValue(v),
|
||||
Percent: strings.TrimSpace(measurement.Percentage(v, config.Total)),
|
||||
}
|
||||
nodes = append(nodes, node)
|
||||
if len(n.In) == 0 {
|
||||
nodes[nroots], nodes[len(nodes)-1] = nodes[len(nodes)-1], nodes[nroots]
|
||||
nroots++
|
||||
rootValue += v
|
||||
}
|
||||
nodeMap[n] = node
|
||||
// Get all node names into an array.
|
||||
nodeArr = append(nodeArr, n.Info.Name)
|
||||
}
|
||||
// Populate the child links.
|
||||
for _, n := range g.Nodes {
|
||||
node := nodeMap[n]
|
||||
for child := range n.Out {
|
||||
node.Children = append(node.Children, nodeMap[child])
|
||||
}
|
||||
}
|
||||
|
||||
rootNode := &treeNode{
|
||||
Name: "root",
|
||||
FullName: "root",
|
||||
Cum: rootValue,
|
||||
CumFormat: config.FormatValue(rootValue),
|
||||
Percent: strings.TrimSpace(measurement.Percentage(rootValue, config.Total)),
|
||||
Children: nodes[0:nroots],
|
||||
}
|
||||
|
||||
// Marshal the flame graph to JSON for the template.
|
||||
b, err := json.Marshal(rootNode)
|
||||
if err != nil {
|
||||
http.Error(w, "error serializing flame graph", http.StatusInternalServerError)
|
||||
ui.options.UI.PrintErr(err)
|
||||
return
|
||||
}
|
||||
|
||||
ui.render(w, "flamegraph", rpt, errList, config.Labels, webArgs{
|
||||
FlameGraph: template.JS(b),
|
||||
Nodes: nodeArr,
|
||||
})
|
||||
}
|
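The single-letter JSON keys above keep the flame-graph payload compact. A standalone illustration of the shape the browser receives; the type is repeated here only so the sketch compiles on its own, and the values are made up:

package main

import (
	"encoding/json"
	"fmt"
)

// treeNode mirrors the struct above.
type treeNode struct {
	Name      string      `json:"n"`
	FullName  string      `json:"f"`
	Cum       int64       `json:"v"`
	CumFormat string      `json:"l"`
	Percent   string      `json:"p"`
	Children  []*treeNode `json:"c"`
}

func main() {
	root := &treeNode{
		Name: "root", FullName: "root", Cum: 100, CumFormat: "100ms", Percent: "100%",
		Children: []*treeNode{
			{Name: "main", FullName: "main.main", Cum: 100, CumFormat: "100ms", Percent: "100%"},
		},
	}
	b, _ := json.Marshal(root)
	fmt.Println(string(b))
	// Roughly: {"n":"root","f":"root","v":100,"l":"100ms","p":"100%","c":[{...}]}
}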
459
vendor/github.com/google/pprof/internal/driver/interactive.go
generated
vendored
Normal file
459
vendor/github.com/google/pprof/internal/driver/interactive.go
generated
vendored
Normal file
@@ -0,0 +1,459 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/internal/report"
|
||||
"github.com/google/pprof/profile"
|
||||
)
|
||||
|
||||
var commentStart = "//:" // Sentinel for comments on options
|
||||
var tailDigitsRE = regexp.MustCompile("[0-9]+$")
|
||||
|
||||
// interactive starts a shell to read pprof commands.
|
||||
func interactive(p *profile.Profile, o *plugin.Options) error {
|
||||
// Enter command processing loop.
|
||||
o.UI.SetAutoComplete(newCompleter(functionNames(p)))
|
||||
pprofVariables.set("compact_labels", "true")
|
||||
pprofVariables["sample_index"].help += fmt.Sprintf("Or use sample_index=name, with name in %v.\n", sampleTypes(p))
|
||||
|
||||
// Do not wait for the visualizer to complete, to allow multiple
|
||||
// graphs to be visualized simultaneously.
|
||||
interactiveMode = true
|
||||
shortcuts := profileShortcuts(p)
|
||||
|
||||
// Get all groups in pprofVariables to allow for clearer error messages.
|
||||
groups := groupOptions(pprofVariables)
|
||||
|
||||
greetings(p, o.UI)
|
||||
for {
|
||||
input, err := o.UI.ReadLine("(pprof) ")
|
||||
if err != nil {
|
||||
if err != io.EOF {
|
||||
return err
|
||||
}
|
||||
if input == "" {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
for _, input := range shortcuts.expand(input) {
|
||||
// Process assignments of the form variable=value
|
||||
if s := strings.SplitN(input, "=", 2); len(s) > 0 {
|
||||
name := strings.TrimSpace(s[0])
|
||||
var value string
|
||||
if len(s) == 2 {
|
||||
value = s[1]
|
||||
if comment := strings.LastIndex(value, commentStart); comment != -1 {
|
||||
value = value[:comment]
|
||||
}
|
||||
value = strings.TrimSpace(value)
|
||||
}
|
||||
if v := pprofVariables[name]; v != nil {
|
||||
if name == "sample_index" {
|
||||
// Error check sample_index=xxx to ensure xxx is a valid sample type.
|
||||
index, err := p.SampleIndexByName(value)
|
||||
if err != nil {
|
||||
o.UI.PrintErr(err)
|
||||
continue
|
||||
}
|
||||
value = p.SampleType[index].Type
|
||||
}
|
||||
if err := pprofVariables.set(name, value); err != nil {
|
||||
o.UI.PrintErr(err)
|
||||
}
|
||||
continue
|
||||
}
|
||||
// Allow group=variable syntax by converting into variable="".
|
||||
if v := pprofVariables[value]; v != nil && v.group == name {
|
||||
if err := pprofVariables.set(value, ""); err != nil {
|
||||
o.UI.PrintErr(err)
|
||||
}
|
||||
continue
|
||||
} else if okValues := groups[name]; okValues != nil {
|
||||
o.UI.PrintErr(fmt.Errorf("unrecognized value for %s: %q. Use one of %s", name, value, strings.Join(okValues, ", ")))
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
tokens := strings.Fields(input)
|
||||
if len(tokens) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
switch tokens[0] {
|
||||
case "o", "options":
|
||||
printCurrentOptions(p, o.UI)
|
||||
continue
|
||||
case "exit", "quit":
|
||||
return nil
|
||||
case "help":
|
||||
commandHelp(strings.Join(tokens[1:], " "), o.UI)
|
||||
continue
|
||||
}
|
||||
|
||||
args, vars, err := parseCommandLine(tokens)
|
||||
if err == nil {
|
||||
err = generateReportWrapper(p, args, vars, o)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
o.UI.PrintErr(err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// groupOptions returns a map containing all non-empty groups
|
||||
// mapped to an array of the option names in that group in
|
||||
// sorted order.
|
||||
func groupOptions(vars variables) map[string][]string {
|
||||
groups := make(map[string][]string)
|
||||
for name, option := range vars {
|
||||
group := option.group
|
||||
if group != "" {
|
||||
groups[group] = append(groups[group], name)
|
||||
}
|
||||
}
|
||||
for _, names := range groups {
|
||||
sort.Strings(names)
|
||||
}
|
||||
return groups
|
||||
}
|
||||
|
||||
var generateReportWrapper = generateReport // For testing purposes.
|
||||
|
||||
// greetings prints a brief welcome and some overall profile
|
||||
// information before accepting interactive commands.
|
||||
func greetings(p *profile.Profile, ui plugin.UI) {
|
||||
numLabelUnits := identifyNumLabelUnits(p, ui)
|
||||
ropt, err := reportOptions(p, numLabelUnits, pprofVariables)
|
||||
if err == nil {
|
||||
rpt := report.New(p, ropt)
|
||||
ui.Print(strings.Join(report.ProfileLabels(rpt), "\n"))
|
||||
if rpt.Total() == 0 && len(p.SampleType) > 1 {
|
||||
ui.Print(`No samples were found with the default sample value type.`)
|
||||
ui.Print(`Try "sample_index" command to analyze different sample values.`, "\n")
|
||||
}
|
||||
}
|
||||
ui.Print(`Entering interactive mode (type "help" for commands, "o" for options)`)
|
||||
}
|
||||
|
||||
// shortcuts represents composite commands that expand into a sequence
|
||||
// of other commands.
|
||||
type shortcuts map[string][]string
|
||||
|
||||
func (a shortcuts) expand(input string) []string {
|
||||
input = strings.TrimSpace(input)
|
||||
if a != nil {
|
||||
if r, ok := a[input]; ok {
|
||||
return r
|
||||
}
|
||||
}
|
||||
return []string{input}
|
||||
}
|
||||
|
||||
var pprofShortcuts = shortcuts{
|
||||
":": []string{"focus=", "ignore=", "hide=", "tagfocus=", "tagignore="},
|
||||
}
|
||||
|
||||
// profileShortcuts creates macros for convenience and backward compatibility.
|
||||
func profileShortcuts(p *profile.Profile) shortcuts {
|
||||
s := pprofShortcuts
|
||||
// Add shortcuts for sample types
|
||||
for _, st := range p.SampleType {
|
||||
command := fmt.Sprintf("sample_index=%s", st.Type)
|
||||
s[st.Type] = []string{command}
|
||||
s["total_"+st.Type] = []string{"mean=0", command}
|
||||
s["mean_"+st.Type] = []string{"mean=1", command}
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func sampleTypes(p *profile.Profile) []string {
|
||||
types := make([]string, len(p.SampleType))
|
||||
for i, t := range p.SampleType {
|
||||
types[i] = t.Type
|
||||
}
|
||||
return types
|
||||
}
|
||||
|
||||
func printCurrentOptions(p *profile.Profile, ui plugin.UI) {
|
||||
var args []string
|
||||
type groupInfo struct {
|
||||
set string
|
||||
values []string
|
||||
}
|
||||
groups := make(map[string]*groupInfo)
|
||||
for n, o := range pprofVariables {
|
||||
v := o.stringValue()
|
||||
comment := ""
|
||||
if g := o.group; g != "" {
|
||||
gi, ok := groups[g]
|
||||
if !ok {
|
||||
gi = &groupInfo{}
|
||||
groups[g] = gi
|
||||
}
|
||||
if o.boolValue() {
|
||||
gi.set = n
|
||||
}
|
||||
gi.values = append(gi.values, n)
|
||||
continue
|
||||
}
|
||||
switch {
|
||||
case n == "sample_index":
|
||||
st := sampleTypes(p)
|
||||
if v == "" {
|
||||
// Apply default (last sample index).
|
||||
v = st[len(st)-1]
|
||||
}
|
||||
// Add comments for all sample types in profile.
|
||||
comment = "[" + strings.Join(st, " | ") + "]"
|
||||
case n == "source_path":
|
||||
continue
|
||||
case n == "nodecount" && v == "-1":
|
||||
comment = "default"
|
||||
case v == "":
|
||||
// Add quotes for empty values.
|
||||
v = `""`
|
||||
}
|
||||
if comment != "" {
|
||||
comment = commentStart + " " + comment
|
||||
}
|
||||
args = append(args, fmt.Sprintf(" %-25s = %-20s %s", n, v, comment))
|
||||
}
|
||||
for g, vars := range groups {
|
||||
sort.Strings(vars.values)
|
||||
comment := commentStart + " [" + strings.Join(vars.values, " | ") + "]"
|
||||
args = append(args, fmt.Sprintf(" %-25s = %-20s %s", g, vars.set, comment))
|
||||
}
|
||||
sort.Strings(args)
|
||||
ui.Print(strings.Join(args, "\n"))
|
||||
}
|
||||
|
||||
// parseCommandLine parses a command and returns the pprof command to
|
||||
// execute and a set of variables for the report.
|
||||
func parseCommandLine(input []string) ([]string, variables, error) {
|
||||
cmd, args := input[:1], input[1:]
|
||||
name := cmd[0]
|
||||
|
||||
c := pprofCommands[name]
|
||||
if c == nil {
|
||||
// Attempt to split trailing digits off abbreviated commands (e.g. top10).
|
||||
if d := tailDigitsRE.FindString(name); d != "" && d != name {
|
||||
name = name[:len(name)-len(d)]
|
||||
cmd[0], args = name, append([]string{d}, args...)
|
||||
c = pprofCommands[name]
|
||||
}
|
||||
}
|
||||
if c == nil {
|
||||
return nil, nil, fmt.Errorf("unrecognized command: %q", name)
|
||||
}
|
||||
|
||||
if c.hasParam {
|
||||
if len(args) == 0 {
|
||||
return nil, nil, fmt.Errorf("command %s requires an argument", name)
|
||||
}
|
||||
cmd = append(cmd, args[0])
|
||||
args = args[1:]
|
||||
}
|
||||
|
||||
// Copy the variables, since options set on the command line are not persistent.
|
||||
vcopy := pprofVariables.makeCopy()
|
||||
|
||||
var focus, ignore string
|
||||
for i := 0; i < len(args); i++ {
|
||||
t := args[i]
|
||||
if _, err := strconv.ParseInt(t, 10, 32); err == nil {
|
||||
vcopy.set("nodecount", t)
|
||||
continue
|
||||
}
|
||||
switch t[0] {
|
||||
case '>':
|
||||
outputFile := t[1:]
|
||||
if outputFile == "" {
|
||||
i++
|
||||
if i >= len(args) {
|
||||
return nil, nil, fmt.Errorf("unexpected end of line after >")
|
||||
}
|
||||
outputFile = args[i]
|
||||
}
|
||||
vcopy.set("output", outputFile)
|
||||
case '-':
|
||||
if t == "--cum" || t == "-cum" {
|
||||
vcopy.set("cum", "t")
|
||||
continue
|
||||
}
|
||||
ignore = catRegex(ignore, t[1:])
|
||||
default:
|
||||
focus = catRegex(focus, t)
|
||||
}
|
||||
}
|
||||
|
||||
if name == "tags" {
|
||||
updateFocusIgnore(vcopy, "tag", focus, ignore)
|
||||
} else {
|
||||
updateFocusIgnore(vcopy, "", focus, ignore)
|
||||
}
|
||||
|
||||
if vcopy["nodecount"].intValue() == -1 && (name == "text" || name == "top") {
|
||||
vcopy.set("nodecount", "10")
|
||||
}
|
||||
|
||||
return cmd, vcopy, nil
|
||||
}
|
||||
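The trailing-digit split above is what lets an input like top10 behave as top 10. A tiny standalone illustration of the same regexp trick, with the command table itself left out:

package main

import (
	"fmt"
	"regexp"
)

var tailDigitsRE = regexp.MustCompile("[0-9]+$")

// splitAbbreviated separates a trailing count from an abbreviated command name.
func splitAbbreviated(name string) (cmd, count string) {
	if d := tailDigitsRE.FindString(name); d != "" && d != name {
		return name[:len(name)-len(d)], d
	}
	return name, ""
}

func main() {
	fmt.Println(splitAbbreviated("top10"))   // top 10
	fmt.Println(splitAbbreviated("weblist")) // weblist
}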
|
||||
func updateFocusIgnore(v variables, prefix, f, i string) {
|
||||
if f != "" {
|
||||
focus := prefix + "focus"
|
||||
v.set(focus, catRegex(v[focus].value, f))
|
||||
}
|
||||
|
||||
if i != "" {
|
||||
ignore := prefix + "ignore"
|
||||
v.set(ignore, catRegex(v[ignore].value, i))
|
||||
}
|
||||
}
|
||||
|
||||
func catRegex(a, b string) string {
|
||||
if a != "" && b != "" {
|
||||
return a + "|" + b
|
||||
}
|
||||
return a + b
|
||||
}
|
||||
|
||||
// commandHelp displays help and usage information for all Commands
|
||||
// and Variables or a specific Command or Variable.
|
||||
func commandHelp(args string, ui plugin.UI) {
|
||||
if args == "" {
|
||||
help := usage(false)
|
||||
help = help + `
|
||||
: Clear focus/ignore/hide/tagfocus/tagignore
|
||||
|
||||
type "help <cmd|option>" for more information
|
||||
`
|
||||
|
||||
ui.Print(help)
|
||||
return
|
||||
}
|
||||
|
||||
if c := pprofCommands[args]; c != nil {
|
||||
ui.Print(c.help(args))
|
||||
return
|
||||
}
|
||||
|
||||
if v := pprofVariables[args]; v != nil {
|
||||
ui.Print(v.help + "\n")
|
||||
return
|
||||
}
|
||||
|
||||
ui.PrintErr("Unknown command: " + args)
|
||||
}
|
||||
|
||||
// newCompleter creates an autocompletion function over pprof commands, options,
// and the supplied function names.
|
||||
func newCompleter(fns []string) func(string) string {
|
||||
return func(line string) string {
|
||||
v := pprofVariables
|
||||
switch tokens := strings.Fields(line); len(tokens) {
|
||||
case 0:
|
||||
// Nothing to complete
|
||||
case 1:
|
||||
// Single token -- complete command name
|
||||
if match := matchVariableOrCommand(v, tokens[0]); match != "" {
|
||||
return match
|
||||
}
|
||||
case 2:
|
||||
if tokens[0] == "help" {
|
||||
if match := matchVariableOrCommand(v, tokens[1]); match != "" {
|
||||
return tokens[0] + " " + match
|
||||
}
|
||||
return line
|
||||
}
|
||||
fallthrough
|
||||
default:
|
||||
// Multiple tokens -- complete using functions, except for tags
|
||||
if cmd := pprofCommands[tokens[0]]; cmd != nil && tokens[0] != "tags" {
|
||||
lastTokenIdx := len(tokens) - 1
|
||||
lastToken := tokens[lastTokenIdx]
|
||||
if strings.HasPrefix(lastToken, "-") {
|
||||
lastToken = "-" + functionCompleter(lastToken[1:], fns)
|
||||
} else {
|
||||
lastToken = functionCompleter(lastToken, fns)
|
||||
}
|
||||
return strings.Join(append(tokens[:lastTokenIdx], lastToken), " ")
|
||||
}
|
||||
}
|
||||
return line
|
||||
}
|
||||
}
|
||||
|
||||
// matchVariableOrCommand attempts to match a string token to the prefix of a
// command or variable name, returning the match only when it is unambiguous.
|
||||
func matchVariableOrCommand(v variables, token string) string {
|
||||
token = strings.ToLower(token)
|
||||
found := ""
|
||||
for cmd := range pprofCommands {
|
||||
if strings.HasPrefix(cmd, token) {
|
||||
if found != "" {
|
||||
return ""
|
||||
}
|
||||
found = cmd
|
||||
}
|
||||
}
|
||||
for variable := range v {
|
||||
if strings.HasPrefix(variable, token) {
|
||||
if found != "" {
|
||||
return ""
|
||||
}
|
||||
found = variable
|
||||
}
|
||||
}
|
||||
return found
|
||||
}
|
||||
|
||||
// functionCompleter replaces the provided substring with a function
// name retrieved from the profile if exactly one match exists. Otherwise,
// it returns the substring unchanged. It is a no-op if the profile is
// not specified.
|
||||
func functionCompleter(substring string, fns []string) string {
|
||||
found := ""
|
||||
for _, fName := range fns {
|
||||
if strings.Contains(fName, substring) {
|
||||
if found != "" {
|
||||
return substring
|
||||
}
|
||||
found = fName
|
||||
}
|
||||
}
|
||||
if found != "" {
|
||||
return found
|
||||
}
|
||||
return substring
|
||||
}
|
||||
|
||||
func functionNames(p *profile.Profile) []string {
|
||||
var fns []string
|
||||
for _, fn := range p.Function {
|
||||
fns = append(fns, fn.Name)
|
||||
}
|
||||
return fns
|
||||
}
|
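catRegex above is how successive positional focus and ignore arguments fold into a single regexp alternation. A quick standalone illustration:

package main

import "fmt"

// catRegex mirrors the helper above: non-empty operands are joined with "|".
func catRegex(a, b string) string {
	if a != "" && b != "" {
		return a + "|" + b
	}
	return a + b
}

func main() {
	focus := ""
	for _, arg := range []string{"foo", "bar"} {
		focus = catRegex(focus, arg)
	}
	fmt.Println(focus) // foo|bar
}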
316
vendor/github.com/google/pprof/internal/driver/interactive_test.go
generated
vendored
Normal file
316
vendor/github.com/google/pprof/internal/driver/interactive_test.go
generated
vendored
Normal file
@@ -0,0 +1,316 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/internal/proftest"
|
||||
"github.com/google/pprof/internal/report"
|
||||
"github.com/google/pprof/internal/transport"
|
||||
"github.com/google/pprof/profile"
|
||||
)
|
||||
|
||||
func TestShell(t *testing.T) {
|
||||
p := &profile.Profile{}
|
||||
generateReportWrapper = checkValue
|
||||
defer func() { generateReportWrapper = generateReport }()
|
||||
|
||||
// Use test commands and variables to exercise interactive processing
|
||||
var savedCommands commands
|
||||
savedCommands, pprofCommands = pprofCommands, testCommands
|
||||
defer func() { pprofCommands = savedCommands }()
|
||||
|
||||
savedVariables := pprofVariables
|
||||
defer func() { pprofVariables = savedVariables }()
|
||||
|
||||
// Random interleave of independent scripts
|
||||
pprofVariables = testVariables(savedVariables)
|
||||
|
||||
// pass in HTTPTransport when setting defaults, because otherwise default
|
||||
// transport will try to add flags to the default flag set.
|
||||
o := setDefaults(&plugin.Options{HTTPTransport: transport.New(nil)})
|
||||
o.UI = newUI(t, interleave(script, 0))
|
||||
if err := interactive(p, o); err != nil {
|
||||
t.Error("first attempt:", err)
|
||||
}
|
||||
// Random interleave of independent scripts
|
||||
pprofVariables = testVariables(savedVariables)
|
||||
o.UI = newUI(t, interleave(script, 1))
|
||||
if err := interactive(p, o); err != nil {
|
||||
t.Error("second attempt:", err)
|
||||
}
|
||||
|
||||
// Random interleave of independent scripts with shortcuts
|
||||
pprofVariables = testVariables(savedVariables)
|
||||
var scScript []string
|
||||
pprofShortcuts, scScript = makeShortcuts(interleave(script, 2), 1)
|
||||
o.UI = newUI(t, scScript)
|
||||
if err := interactive(p, o); err != nil {
|
||||
t.Error("first shortcut attempt:", err)
|
||||
}
|
||||
|
||||
// Random interleave of independent scripts with shortcuts
|
||||
pprofVariables = testVariables(savedVariables)
|
||||
pprofShortcuts, scScript = makeShortcuts(interleave(script, 1), 2)
|
||||
o.UI = newUI(t, scScript)
|
||||
if err := interactive(p, o); err != nil {
|
||||
t.Error("second shortcut attempt:", err)
|
||||
}
|
||||
|
||||
// Group with invalid value
|
||||
pprofVariables = testVariables(savedVariables)
|
||||
ui := &proftest.TestUI{
|
||||
T: t,
|
||||
Input: []string{"cumulative=this"},
|
||||
AllowRx: `unrecognized value for cumulative: "this". Use one of cum, flat`,
|
||||
}
|
||||
o.UI = ui
|
||||
if err := interactive(p, o); err != nil {
|
||||
t.Error("invalid group value:", err)
|
||||
}
|
||||
// Confirm error message written out once.
|
||||
if ui.NumAllowRxMatches != 1 {
|
||||
t.Errorf("want error message to be printed 1 time, got %v", ui.NumAllowRxMatches)
|
||||
}
|
||||
// Verify propagation of IO errors
|
||||
pprofVariables = testVariables(savedVariables)
|
||||
o.UI = newUI(t, []string{"**error**"})
|
||||
if err := interactive(p, o); err == nil {
|
||||
t.Error("expected IO error, got nil")
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
var testCommands = commands{
|
||||
"check": &command{report.Raw, nil, nil, true, "", ""},
|
||||
}
|
||||
|
||||
func testVariables(base variables) variables {
|
||||
v := base.makeCopy()
|
||||
|
||||
v["b"] = &variable{boolKind, "f", "", ""}
|
||||
v["bb"] = &variable{boolKind, "f", "", ""}
|
||||
v["i"] = &variable{intKind, "0", "", ""}
|
||||
v["ii"] = &variable{intKind, "0", "", ""}
|
||||
v["f"] = &variable{floatKind, "0", "", ""}
|
||||
v["ff"] = &variable{floatKind, "0", "", ""}
|
||||
v["s"] = &variable{stringKind, "", "", ""}
|
||||
v["ss"] = &variable{stringKind, "", "", ""}
|
||||
|
||||
v["ta"] = &variable{boolKind, "f", "radio", ""}
|
||||
v["tb"] = &variable{boolKind, "f", "radio", ""}
|
||||
v["tc"] = &variable{boolKind, "t", "radio", ""}
|
||||
|
||||
return v
|
||||
}
|
||||
|
||||
// script contains sequences of commands to be executed for testing. Commands
// are split on semicolons and interleaved randomly, so they must be
// independent of each other.
|
||||
var script = []string{
|
||||
"bb=true;bb=false;check bb=false;bb=yes;check bb=true",
|
||||
"b=1;check b=true;b=n;check b=false",
|
||||
"i=-1;i=-2;check i=-2;i=999999;check i=999999",
|
||||
"check ii=0;ii=-1;check ii=-1;ii=100;check ii=100",
|
||||
"f=-1;f=-2.5;check f=-2.5;f=0.0001;check f=0.0001",
|
||||
"check ff=0;ff=-1.01;check ff=-1.01;ff=100;check ff=100",
|
||||
"s=one;s=two;check s=two",
|
||||
"ss=tree;check ss=tree;ss=;check ss;ss=forest;check ss=forest",
|
||||
"ta=true;check ta=true;check tb=false;check tc=false;tb=1;check tb=true;check ta=false;check tc=false;tc=yes;check tb=false;check ta=false;check tc=true",
|
||||
}
|
||||
|
||||
func makeShortcuts(input []string, seed int) (shortcuts, []string) {
|
||||
rand.Seed(int64(seed))
|
||||
|
||||
s := shortcuts{}
|
||||
var output, chunk []string
|
||||
for _, l := range input {
|
||||
chunk = append(chunk, l)
|
||||
switch rand.Intn(3) {
|
||||
case 0:
|
||||
// Create a macro for commands in 'chunk'.
|
||||
macro := fmt.Sprintf("alias%d", len(s))
|
||||
s[macro] = chunk
|
||||
output = append(output, macro)
|
||||
chunk = nil
|
||||
case 1:
|
||||
// Append commands in 'chunk' by themselves.
|
||||
output = append(output, chunk...)
|
||||
chunk = nil
|
||||
case 2:
|
||||
// Accumulate commands into 'chunk'
|
||||
}
|
||||
}
|
||||
output = append(output, chunk...)
|
||||
return s, output
|
||||
}
|
||||
|
||||
func newUI(t *testing.T, input []string) plugin.UI {
|
||||
return &proftest.TestUI{
|
||||
T: t,
|
||||
Input: input,
|
||||
}
|
||||
}
|
||||
|
||||
func checkValue(p *profile.Profile, cmd []string, vars variables, o *plugin.Options) error {
|
||||
if len(cmd) != 2 {
|
||||
return fmt.Errorf("expected len(cmd)==2, got %v", cmd)
|
||||
}
|
||||
|
||||
input := cmd[1]
|
||||
args := strings.SplitN(input, "=", 2)
|
||||
if len(args) == 0 {
|
||||
return fmt.Errorf("unexpected empty input")
|
||||
}
|
||||
name, value := args[0], ""
|
||||
if len(args) == 2 {
|
||||
value = args[1]
|
||||
}
|
||||
|
||||
gotv := vars[name]
|
||||
if gotv == nil {
|
||||
return fmt.Errorf("Could not find variable named %s", name)
|
||||
}
|
||||
|
||||
if got := gotv.stringValue(); got != value {
|
||||
return fmt.Errorf("Variable %s, want %s, got %s", name, value, got)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func interleave(input []string, seed int) []string {
|
||||
var inputs [][]string
|
||||
for _, s := range input {
|
||||
inputs = append(inputs, strings.Split(s, ";"))
|
||||
}
|
||||
rand.Seed(int64(seed))
|
||||
var output []string
|
||||
for len(inputs) > 0 {
|
||||
next := rand.Intn(len(inputs))
|
||||
output = append(output, inputs[next][0])
|
||||
if tail := inputs[next][1:]; len(tail) > 0 {
|
||||
inputs[next] = tail
|
||||
} else {
|
||||
inputs = append(inputs[:next], inputs[next+1:]...)
|
||||
}
|
||||
}
|
||||
return output
|
||||
}
|
||||
|
||||
func TestInteractiveCommands(t *testing.T) {
|
||||
type interactiveTestcase struct {
|
||||
input string
|
||||
want map[string]string
|
||||
}
|
||||
|
||||
testcases := []interactiveTestcase{
|
||||
{
|
||||
"top 10 --cum focus1 -ignore focus2",
|
||||
map[string]string{
|
||||
"functions": "true",
|
||||
"nodecount": "10",
|
||||
"cum": "true",
|
||||
"focus": "focus1|focus2",
|
||||
"ignore": "ignore",
|
||||
},
|
||||
},
|
||||
{
|
||||
"top10 --cum focus1 -ignore focus2",
|
||||
map[string]string{
|
||||
"functions": "true",
|
||||
"nodecount": "10",
|
||||
"cum": "true",
|
||||
"focus": "focus1|focus2",
|
||||
"ignore": "ignore",
|
||||
},
|
||||
},
|
||||
{
|
||||
"dot",
|
||||
map[string]string{
|
||||
"functions": "true",
|
||||
"nodecount": "80",
|
||||
"cum": "false",
|
||||
},
|
||||
},
|
||||
{
|
||||
"tags -ignore1 -ignore2 focus1 >out",
|
||||
map[string]string{
|
||||
"functions": "true",
|
||||
"nodecount": "80",
|
||||
"cum": "false",
|
||||
"output": "out",
|
||||
"tagfocus": "focus1",
|
||||
"tagignore": "ignore1|ignore2",
|
||||
},
|
||||
},
|
||||
{
|
||||
"weblist find -test",
|
||||
map[string]string{
|
||||
"functions": "false",
|
||||
"addresses": "true",
|
||||
"noinlines": "true",
|
||||
"nodecount": "0",
|
||||
"cum": "false",
|
||||
"flat": "true",
|
||||
"ignore": "test",
|
||||
},
|
||||
},
|
||||
{
|
||||
"callgrind fun -ignore >out",
|
||||
map[string]string{
|
||||
"functions": "false",
|
||||
"addresses": "true",
|
||||
"nodecount": "0",
|
||||
"cum": "false",
|
||||
"flat": "true",
|
||||
"output": "out",
|
||||
},
|
||||
},
|
||||
{
|
||||
"999",
|
||||
nil, // Error
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testcases {
|
||||
cmd, vars, err := parseCommandLine(strings.Fields(tc.input))
|
||||
if tc.want == nil && err != nil {
|
||||
// Error expected
|
||||
continue
|
||||
}
|
||||
if err != nil {
|
||||
t.Errorf("failed on %q: %v", tc.input, err)
|
||||
continue
|
||||
}
|
||||
|
||||
// Get report output format
|
||||
c := pprofCommands[cmd[0]]
|
||||
if c == nil {
|
||||
t.Errorf("unexpected nil command")
|
||||
}
|
||||
vars = applyCommandOverrides(cmd[0], c.format, vars)
|
||||
|
||||
for n, want := range tc.want {
|
||||
if got := vars[n].stringValue(); got != want {
|
||||
t.Errorf("failed on %q, cmd=%q, %s got %s, want %s", tc.input, cmd, n, got, want)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
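The interleavings above look random but are reproducible because the generator is seeded with a fixed value before each run. A trimmed standalone sketch of the same idea, mirroring the interleave helper above:

package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// interleaveDemo merges semicolon-separated scripts in a seed-determined order.
func interleaveDemo(input []string, seed int) []string {
	var inputs [][]string
	for _, s := range input {
		inputs = append(inputs, strings.Split(s, ";"))
	}
	rand.Seed(int64(seed))
	var out []string
	for len(inputs) > 0 {
		next := rand.Intn(len(inputs))
		out = append(out, inputs[next][0])
		if tail := inputs[next][1:]; len(tail) > 0 {
			inputs[next] = tail
		} else {
			inputs = append(inputs[:next], inputs[next+1:]...)
		}
	}
	return out
}

func main() {
	scripts := []string{"a=1;check a=1", "b=2;check b=2"}
	fmt.Println(interleaveDemo(scripts, 0)) // identical output for the same seed
}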
100
vendor/github.com/google/pprof/internal/driver/options.go
generated
vendored
Normal file
100
vendor/github.com/google/pprof/internal/driver/options.go
generated
vendored
Normal file
@@ -0,0 +1,100 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/google/pprof/internal/binutils"
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/internal/symbolizer"
|
||||
"github.com/google/pprof/internal/transport"
|
||||
)
|
||||
|
||||
// setDefaults returns a new plugin.Options with zero fields set to
// sensible defaults.
|
||||
func setDefaults(o *plugin.Options) *plugin.Options {
|
||||
d := &plugin.Options{}
|
||||
if o != nil {
|
||||
*d = *o
|
||||
}
|
||||
if d.Writer == nil {
|
||||
d.Writer = oswriter{}
|
||||
}
|
||||
if d.Flagset == nil {
|
||||
d.Flagset = &GoFlags{}
|
||||
}
|
||||
if d.Obj == nil {
|
||||
d.Obj = &binutils.Binutils{}
|
||||
}
|
||||
if d.UI == nil {
|
||||
d.UI = &stdUI{r: bufio.NewReader(os.Stdin)}
|
||||
}
|
||||
if d.HTTPTransport == nil {
|
||||
d.HTTPTransport = transport.New(d.Flagset)
|
||||
}
|
||||
if d.Sym == nil {
|
||||
d.Sym = &symbolizer.Symbolizer{Obj: d.Obj, UI: d.UI, Transport: d.HTTPTransport}
|
||||
}
|
||||
return d
|
||||
}
|
||||
|
||||
type stdUI struct {
|
||||
r *bufio.Reader
|
||||
}
|
||||
|
||||
func (ui *stdUI) ReadLine(prompt string) (string, error) {
|
||||
os.Stdout.WriteString(prompt)
|
||||
return ui.r.ReadString('\n')
|
||||
}
|
||||
|
||||
func (ui *stdUI) Print(args ...interface{}) {
|
||||
ui.fprint(os.Stderr, args)
|
||||
}
|
||||
|
||||
func (ui *stdUI) PrintErr(args ...interface{}) {
|
||||
ui.fprint(os.Stderr, args)
|
||||
}
|
||||
|
||||
func (ui *stdUI) IsTerminal() bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func (ui *stdUI) WantBrowser() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (ui *stdUI) SetAutoComplete(func(string) string) {
|
||||
}
|
||||
|
||||
func (ui *stdUI) fprint(f *os.File, args []interface{}) {
|
||||
text := fmt.Sprint(args...)
|
||||
if !strings.HasSuffix(text, "\n") {
|
||||
text += "\n"
|
||||
}
|
||||
f.WriteString(text)
|
||||
}
|
||||
|
||||
// oswriter implements the Writer interface using a regular file.
|
||||
type oswriter struct{}
|
||||
|
||||
func (oswriter) Open(name string) (io.WriteCloser, error) {
|
||||
f, err := os.Create(name)
|
||||
return f, err
|
||||
}
|
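setDefaults above copies the caller's options and fills in only the zero fields, so explicit choices always win. A generic sketch of that pattern; the Options type and default values here are illustrative, not plugin.Options:

package main

import "fmt"

// Options stands in for plugin.Options; only the pattern matters.
type Options struct {
	UI     string
	Writer string
}

// withDefaults copies o and fills any zero field, leaving caller choices intact.
func withDefaults(o *Options) *Options {
	d := &Options{}
	if o != nil {
		*d = *o
	}
	if d.UI == "" {
		d.UI = "stdUI"
	}
	if d.Writer == "" {
		d.Writer = "oswriter"
	}
	return d
}

func main() {
	fmt.Println(*withDefaults(&Options{UI: "testUI"})) // {testUI oswriter}
}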
80
vendor/github.com/google/pprof/internal/driver/svg.go
generated
vendored
Normal file
80
vendor/github.com/google/pprof/internal/driver/svg.go
generated
vendored
Normal file
@@ -0,0 +1,80 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"github.com/google/pprof/third_party/svgpan"
|
||||
)
|
||||
|
||||
var (
|
||||
viewBox = regexp.MustCompile(`<svg\s*width="[^"]+"\s*height="[^"]+"\s*viewBox="[^"]+"`)
|
||||
graphID = regexp.MustCompile(`<g id="graph\d"`)
|
||||
svgClose = regexp.MustCompile(`</svg>`)
|
||||
)
|
||||
|
||||
// massageSVG enhances the SVG output from DOT to provide better
|
||||
// panning inside a web browser. It uses the svgpan library, which is
|
||||
// embedded into the svgpan.JSSource variable.
|
||||
func massageSVG(svg string) string {
|
||||
// Workaround for a dot bug which misses quoting some ampersands,
// resulting in unparsable SVG.
svg = strings.Replace(svg, "&;", "&amp;;", -1)
|
||||
|
||||
// Dot's SVG output is
|
||||
//
|
||||
// <svg width="___" height="___"
|
||||
// viewBox="___" xmlns=...>
|
||||
// <g id="graph0" transform="...">
|
||||
// ...
|
||||
// </g>
|
||||
// </svg>
|
||||
//
|
||||
// Change it to
|
||||
//
|
||||
// <svg width="100%" height="100%"
|
||||
// xmlns=...>
|
||||
|
||||
// <script type="text/ecmascript"><![CDATA[` ..$(svgpan.JSSource)... `]]></script>`
|
||||
// <g id="viewport" transform="translate(0,0)">
|
||||
// <g id="graph0" transform="...">
|
||||
// ...
|
||||
// </g>
|
||||
// </g>
|
||||
// </svg>
|
||||
|
||||
if loc := viewBox.FindStringIndex(svg); loc != nil {
|
||||
svg = svg[:loc[0]] +
|
||||
`<svg width="100%" height="100%"` +
|
||||
svg[loc[1]:]
|
||||
}
|
||||
|
||||
if loc := graphID.FindStringIndex(svg); loc != nil {
|
||||
svg = svg[:loc[0]] +
|
||||
`<script type="text/ecmascript"><![CDATA[` + string(svgpan.JSSource) + `]]></script>` +
|
||||
`<g id="viewport" transform="scale(0.5,0.5) translate(0,0)">` +
|
||||
svg[loc[0]:]
|
||||
}
|
||||
|
||||
if loc := svgClose.FindStringIndex(svg); loc != nil {
|
||||
svg = svg[:loc[0]] +
|
||||
`</g>` +
|
||||
svg[loc[0]:]
|
||||
}
|
||||
|
||||
return svg
|
||||
}
|
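massageSVG rewrites the generated SVG by locating anchor markup with FindStringIndex and splicing new content around it. A minimal standalone sketch of that splice, using only the closing-tag case:

package main

import (
	"fmt"
	"regexp"
)

var svgClose = regexp.MustCompile(`</svg>`)

// injectBeforeClose inserts markup just before the closing </svg> tag,
// the same splice massageSVG performs for the svgpan viewport group.
func injectBeforeClose(svg, markup string) string {
	if loc := svgClose.FindStringIndex(svg); loc != nil {
		return svg[:loc[0]] + markup + svg[loc[0]:]
	}
	return svg
}

func main() {
	fmt.Println(injectBeforeClose(`<svg><g id="graph0"></g></svg>`, `</g>`))
}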
54
vendor/github.com/google/pprof/internal/driver/tempfile.go
generated
vendored
Normal file
54
vendor/github.com/google/pprof/internal/driver/tempfile.go
generated
vendored
Normal file
@@ -0,0 +1,54 @@
|
||||
// Copyright 2014 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// newTempFile returns a new output file in dir with the provided prefix and suffix.
|
||||
func newTempFile(dir, prefix, suffix string) (*os.File, error) {
|
||||
for index := 1; index < 10000; index++ {
|
||||
path := filepath.Join(dir, fmt.Sprintf("%s%03d%s", prefix, index, suffix))
|
||||
if _, err := os.Stat(path); err != nil {
|
||||
return os.Create(path)
|
||||
}
|
||||
}
|
||||
// Give up
|
||||
return nil, fmt.Errorf("could not create file of the form %s%03d%s", prefix, 1, suffix)
|
||||
}
|
||||
|
||||
var tempFiles []string
|
||||
var tempFilesMu = sync.Mutex{}
|
||||
|
||||
// deferDeleteTempFile marks a file to be deleted by the next call to Cleanup().
|
||||
func deferDeleteTempFile(path string) {
|
||||
tempFilesMu.Lock()
|
||||
tempFiles = append(tempFiles, path)
|
||||
tempFilesMu.Unlock()
|
||||
}
|
||||
|
||||
// cleanupTempFiles removes any temporary files selected for deferred cleaning.
|
||||
func cleanupTempFiles() {
|
||||
tempFilesMu.Lock()
|
||||
for _, f := range tempFiles {
|
||||
os.Remove(f)
|
||||
}
|
||||
tempFiles = nil
|
||||
tempFilesMu.Unlock()
|
||||
}
|
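Within the driver these helpers are used together: create the file, register it for deferred deletion, and sweep everything at the end of the run. A usage sketch only, assuming the three package-level helpers above are in scope; the function name is hypothetical:

// writeReport writes data to a fresh temp file and registers it for the
// deferred cleanup performed by cleanupTempFiles.
func writeReport(dir string, data []byte) error {
	f, err := newTempFile(dir, "pprof", ".svg")
	if err != nil {
		return err
	}
	deferDeleteTempFile(f.Name()) // removed by the next cleanup sweep
	defer f.Close()
	_, err = f.Write(data)
	return err
}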
24
vendor/github.com/google/pprof/internal/driver/testdata/cppbench.contention
generated
vendored
Normal file
24
vendor/github.com/google/pprof/internal/driver/testdata/cppbench.contention
generated
vendored
Normal file
@@ -0,0 +1,24 @@
|
||||
--- contentionz 1 ---
|
||||
cycles/second = 3201000000
|
||||
sampling period = 100
|
||||
ms since reset = 16502830
|
||||
discarded samples = 0
|
||||
19490304 27 @ 0xbccc97 0xc61202 0x42ed5f 0x42edc1 0x42e15a 0x5261af 0x526edf 0x5280ab 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
768 1 @ 0xbccc97 0xa42dc7 0xa456e4 0x7fcdc2ff214e
|
||||
5760 2 @ 0xbccc97 0xb82b73 0xb82bcb 0xb87eab 0xb8814c 0x4e969d 0x4faa17 0x4fc5f6 0x4fd028 0x4fd230 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
569088 1 @ 0xbccc97 0xb82b73 0xb82bcb 0xb87f08 0xb8814c 0x42ed5f 0x42edc1 0x42e15a 0x5261af 0x526edf 0x5280ab 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
2432 1 @ 0xbccc97 0xb82b73 0xb82bcb 0xb87eab 0xb8814c 0x7aa74c 0x7ab844 0x7ab914 0x79e9e9 0x79e326 0x4d299e 0x4d4b7b 0x4b7be8 0x4b7ff1 0x4d2dae 0x79e80a
|
||||
2034816 3 @ 0xbccc97 0xb82f0f 0xb83003 0xb87d50 0xc635f0 0x42ecc3 0x42e14c 0x5261af 0x526edf 0x5280ab 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
--- Memory map: ---
|
||||
00400000-00fcb000: cppbench_server_main
|
||||
7fcdc231e000-7fcdc2321000: /libnss_cache-2.15.so
|
||||
7fcdc2522000-7fcdc252e000: /libnss_files-2.15.so
|
||||
7fcdc272f000-7fcdc28dd000: /libc-2.15.so
|
||||
7fcdc2ae7000-7fcdc2be2000: /libm-2.15.so
|
||||
7fcdc2de3000-7fcdc2dea000: /librt-2.15.so
|
||||
7fcdc2feb000-7fcdc3003000: /libpthread-2.15.so
|
||||
7fcdc3208000-7fcdc320a000: /libdl-2.15.so
|
||||
7fcdc340c000-7fcdc3415000: /libcrypt-2.15.so
|
||||
7fcdc3645000-7fcdc3669000: /ld-2.15.so
|
||||
7fff86bff000-7fff86c00000: [vdso]
|
||||
ffffffffff600000-ffffffffff601000: [vsyscall]
|
BIN
vendor/github.com/google/pprof/internal/driver/testdata/cppbench.cpu
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/internal/driver/testdata/cppbench.cpu
generated
vendored
Normal file
Binary file not shown.
19
vendor/github.com/google/pprof/internal/driver/testdata/cppbench.small.contention
generated
vendored
Normal file
19
vendor/github.com/google/pprof/internal/driver/testdata/cppbench.small.contention
generated
vendored
Normal file
@@ -0,0 +1,19 @@
|
||||
--- contentionz 1 ---
|
||||
cycles/second = 3201000000
|
||||
sampling period = 100
|
||||
ms since reset = 16502830
|
||||
discarded samples = 0
|
||||
100 10 @ 0xbccc97 0xc61202 0x42ed5f 0x42edc1 0x42e15a 0x5261af 0x526edf 0x5280ab 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
--- Memory map: ---
|
||||
00400000-00fcb000: cppbench_server_main
|
||||
7fcdc231e000-7fcdc2321000: /libnss_cache-2.15.so
|
||||
7fcdc2522000-7fcdc252e000: /libnss_files-2.15.so
|
||||
7fcdc272f000-7fcdc28dd000: /libc-2.15.so
|
||||
7fcdc2ae7000-7fcdc2be2000: /libm-2.15.so
|
||||
7fcdc2de3000-7fcdc2dea000: /librt-2.15.so
|
||||
7fcdc2feb000-7fcdc3003000: /libpthread-2.15.so
|
||||
7fcdc3208000-7fcdc320a000: /libdl-2.15.so
|
||||
7fcdc340c000-7fcdc3415000: /libcrypt-2.15.so
|
||||
7fcdc3645000-7fcdc3669000: /ld-2.15.so
|
||||
7fff86bff000-7fff86c00000: [vdso]
|
||||
ffffffffff600000-ffffffffff601000: [vsyscall]
|
17
vendor/github.com/google/pprof/internal/driver/testdata/file1000.src
generated
vendored
Normal file
17
vendor/github.com/google/pprof/internal/driver/testdata/file1000.src
generated
vendored
Normal file
@@ -0,0 +1,17 @@
|
||||
line1
|
||||
line2
|
||||
line3
|
||||
line4
|
||||
line5
|
||||
line6
|
||||
line7
|
||||
line8
|
||||
line9
|
||||
line0
|
||||
line1
|
||||
line2
|
||||
line3
|
||||
line4
|
||||
line5
|
||||
|
||||
|
17
vendor/github.com/google/pprof/internal/driver/testdata/file2000.src
generated
vendored
Normal file
17
vendor/github.com/google/pprof/internal/driver/testdata/file2000.src
generated
vendored
Normal file
@@ -0,0 +1,17 @@
|
||||
line1
|
||||
line2
|
||||
line3
|
||||
line4
|
||||
line5
|
||||
line6
|
||||
line7
|
||||
line8
|
||||
line9
|
||||
line0
|
||||
line1
|
||||
line2
|
||||
line3
|
||||
line4
|
||||
line5
|
||||
|
||||
|
17
vendor/github.com/google/pprof/internal/driver/testdata/file3000.src
generated
vendored
Normal file
17
vendor/github.com/google/pprof/internal/driver/testdata/file3000.src
generated
vendored
Normal file
@@ -0,0 +1,17 @@
|
||||
line1
|
||||
line2
|
||||
line3
|
||||
line4
|
||||
line5
|
||||
line6
|
||||
line7
|
||||
line8
|
||||
line9
|
||||
line0
|
||||
line1
|
||||
line2
|
||||
line3
|
||||
line4
|
||||
line5
|
||||
|
||||
|
BIN
vendor/github.com/google/pprof/internal/driver/testdata/go.crc32.cpu
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/internal/driver/testdata/go.crc32.cpu
generated
vendored
Normal file
Binary file not shown.
BIN
vendor/github.com/google/pprof/internal/driver/testdata/go.nomappings.crash
generated
vendored
Normal file
BIN
vendor/github.com/google/pprof/internal/driver/testdata/go.nomappings.crash
generated
vendored
Normal file
Binary file not shown.
10
vendor/github.com/google/pprof/internal/driver/testdata/pprof.contention.cum.files.dot
generated
vendored
Normal file
10
vendor/github.com/google/pprof/internal/driver/testdata/pprof.contention.cum.files.dot
generated
vendored
Normal file
@@ -0,0 +1,10 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid-contention" [shape=box fontsize=16 label="Build ID: buildid-contention\lComment #1\lComment #2\lType: delay\lShowing nodes accounting for 149.50ms, 100% of 149.50ms total\l"] }
|
||||
N1 [label="file3000.src\n32.77ms (21.92%)\nof 149.50ms (100%)" id="node1" fontsize=20 shape=box tooltip="testdata/file3000.src (149.50ms)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N2 [label="file1000.src\n51.20ms (34.25%)" id="node2" fontsize=23 shape=box tooltip="testdata/file1000.src (51.20ms)" color="#b23100" fillcolor="#eddbd5"]
|
||||
N3 [label="file2000.src\n65.54ms (43.84%)\nof 75.78ms (50.68%)" id="node3" fontsize=24 shape=box tooltip="testdata/file2000.src (75.78ms)" color="#b22000" fillcolor="#edd9d5"]
|
||||
N1 -> N3 [label=" 75.78ms" weight=51 penwidth=3 color="#b22000" tooltip="testdata/file3000.src -> testdata/file2000.src (75.78ms)" labeltooltip="testdata/file3000.src -> testdata/file2000.src (75.78ms)"]
|
||||
N1 -> N2 [label=" 40.96ms" weight=28 penwidth=2 color="#b23900" tooltip="testdata/file3000.src -> testdata/file1000.src (40.96ms)" labeltooltip="testdata/file3000.src -> testdata/file1000.src (40.96ms)"]
|
||||
N3 -> N2 [label=" 10.24ms" weight=7 color="#b29775" tooltip="testdata/file2000.src -> testdata/file1000.src (10.24ms)" labeltooltip="testdata/file2000.src -> testdata/file1000.src (10.24ms)"]
|
||||
}
|
9
vendor/github.com/google/pprof/internal/driver/testdata/pprof.contention.flat.addresses.dot.focus.ignore
generated
vendored
Normal file
9
vendor/github.com/google/pprof/internal/driver/testdata/pprof.contention.flat.addresses.dot.focus.ignore
generated
vendored
Normal file
@@ -0,0 +1,9 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid-contention" [shape=box fontsize=16 label="Build ID: buildid-contention\lComment #1\lComment #2\lType: delay\lActive filters:\l focus=[X1]000\l ignore=[X3]002\lShowing nodes accounting for 40.96ms, 27.40% of 149.50ms total\l"] }
|
||||
N1 [label="0000000000001000\nline1000\nfile1000.src:1\n40.96ms (27.40%)" id="node1" fontsize=24 shape=box tooltip="0000000000001000 line1000 testdata/file1000.src:1 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
N2 [label="0000000000003001\nline3000\nfile3000.src:5\n0 of 40.96ms (27.40%)" id="node2" fontsize=8 shape=box tooltip="0000000000003001 line3000 testdata/file3000.src:5 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
N3 [label="0000000000003001\nline3001\nfile3000.src:3\n0 of 40.96ms (27.40%)" id="node3" fontsize=8 shape=box tooltip="0000000000003001 line3001 testdata/file3000.src:3 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
N2 -> N3 [label=" 40.96ms\n (inline)" weight=28 penwidth=2 color="#b23900" tooltip="0000000000003001 line3000 testdata/file3000.src:5 -> 0000000000003001 line3001 testdata/file3000.src:3 (40.96ms)" labeltooltip="0000000000003001 line3000 testdata/file3000.src:5 -> 0000000000003001 line3001 testdata/file3000.src:3 (40.96ms)"]
|
||||
N3 -> N1 [label=" 40.96ms" weight=28 penwidth=2 color="#b23900" tooltip="0000000000003001 line3001 testdata/file3000.src:3 -> 0000000000001000 line1000 testdata/file1000.src:1 (40.96ms)" labeltooltip="0000000000003001 line3001 testdata/file3000.src:3 -> 0000000000001000 line1000 testdata/file1000.src:1 (40.96ms)"]
|
||||
}
|
99
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.call_tree.callgrind
generated
vendored
Normal file
99
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.call_tree.callgrind
generated
vendored
Normal file
@@ -0,0 +1,99 @@
|
||||
positions: instr line
|
||||
events: cpu(ms)
|
||||
|
||||
ob=(1) /path/to/testbinary
|
||||
fl=(1) testdata/file1000.src
|
||||
fn=(1) line1000
|
||||
0x1000 1 1000
|
||||
* 1 100
|
||||
|
||||
ob=(1)
|
||||
fl=(2) testdata/file2000.src
|
||||
fn=(2) line2001
|
||||
+4096 9 10
|
||||
|
||||
ob=(1)
|
||||
fl=(3) testdata/file3000.src
|
||||
fn=(3) line3002
|
||||
+4096 2 10
|
||||
cfl=(2)
|
||||
cfn=(4) line2000 [1/2]
|
||||
calls=0 * 4
|
||||
* * 1000
|
||||
|
||||
ob=(1)
|
||||
fl=(2)
|
||||
fn=(5) line2000
|
||||
-4096 4 0
|
||||
cfl=(2)
|
||||
cfn=(6) line2001 [2/2]
|
||||
calls=0 -4096 9
|
||||
* * 1000
|
||||
* 4 0
|
||||
cfl=(2)
|
||||
cfn=(7) line2001 [1/2]
|
||||
calls=0 * 9
|
||||
* * 10
|
||||
|
||||
ob=(1)
|
||||
fl=(2)
|
||||
fn=(2)
|
||||
* 9 0
|
||||
cfl=(1)
|
||||
cfn=(8) line1000 [1/2]
|
||||
calls=0 -4096 1
|
||||
* * 1000
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(9) line3000
|
||||
+4096 6 0
|
||||
cfl=(3)
|
||||
cfn=(10) line3001 [1/2]
|
||||
calls=0 +4096 5
|
||||
* * 1010
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(11) line3001
|
||||
* 5 0
|
||||
cfl=(3)
|
||||
cfn=(12) line3002 [1/2]
|
||||
calls=0 * 2
|
||||
* * 1010
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(9)
|
||||
+1 9 0
|
||||
cfl=(3)
|
||||
cfn=(13) line3001 [2/2]
|
||||
calls=0 +1 8
|
||||
* * 100
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(11)
|
||||
* 8 0
|
||||
cfl=(1)
|
||||
cfn=(14) line1000 [2/2]
|
||||
calls=0 -8193 1
|
||||
* * 100
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(9)
|
||||
+1 9 0
|
||||
cfl=(3)
|
||||
cfn=(15) line3002 [2/2]
|
||||
calls=0 +1 5
|
||||
* * 10
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(3)
|
||||
* 5 0
|
||||
cfl=(2)
|
||||
cfn=(16) line2000 [2/2]
|
||||
calls=0 -4098 4
|
||||
* * 10
|
88
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.callgrind
generated
vendored
Normal file
88
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.callgrind
generated
vendored
Normal file
@@ -0,0 +1,88 @@
|
||||
positions: instr line
|
||||
events: cpu(ms)
|
||||
|
||||
ob=(1) /path/to/testbinary
|
||||
fl=(1) testdata/file1000.src
|
||||
fn=(1) line1000
|
||||
0x1000 1 1100
|
||||
|
||||
ob=(1)
|
||||
fl=(2) testdata/file2000.src
|
||||
fn=(2) line2001
|
||||
+4096 9 10
|
||||
cfl=(1)
|
||||
cfn=(1)
|
||||
calls=0 * 1
|
||||
* * 1000
|
||||
|
||||
ob=(1)
|
||||
fl=(3) testdata/file3000.src
|
||||
fn=(3) line3002
|
||||
+4096 2 10
|
||||
cfl=(2)
|
||||
cfn=(4) line2000
|
||||
calls=0 * 4
|
||||
* * 1000
|
||||
|
||||
ob=(1)
|
||||
fl=(2)
|
||||
fn=(4)
|
||||
-4096 4 0
|
||||
cfl=(2)
|
||||
cfn=(2)
|
||||
calls=0 -4096 9
|
||||
* * 1010
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(5) line3000
|
||||
+4096 6 0
|
||||
cfl=(3)
|
||||
cfn=(6) line3001
|
||||
calls=0 +4096 5
|
||||
* * 1010
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(6)
|
||||
* 5 0
|
||||
cfl=(3)
|
||||
cfn=(3)
|
||||
calls=0 * 2
|
||||
* * 1010
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(5)
|
||||
+1 9 0
|
||||
cfl=(3)
|
||||
cfn=(6)
|
||||
calls=0 +1 8
|
||||
* * 100
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(6)
|
||||
* 8 0
|
||||
cfl=(1)
|
||||
cfn=(1)
|
||||
calls=0 -8193 1
|
||||
* * 100
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(5)
|
||||
+1 9 0
|
||||
cfl=(3)
|
||||
cfn=(3)
|
||||
calls=0 +1 5
|
||||
* * 10
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(3)
|
||||
* 5 0
|
||||
cfl=(2)
|
||||
cfn=(4)
|
||||
calls=0 -4098 4
|
||||
* * 10
|
1
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.comments
generated
vendored
Normal file
1
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.comments
generated
vendored
Normal file
@@ -0,0 +1 @@
|
||||
some-comment
|

8
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.cum.lines.text.focus.hide
generated
vendored
Normal file
@@ -0,0 +1,8 @@
Active filters:
focus=[12]00
hide=line[X3]0
Showing nodes accounting for 1.11s, 99.11% of 1.12s total
flat flat% sum% cum cum%
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src:1
0 0% 98.21% 1.01s 90.18% line2000 testdata/file2000.src:4
0.01s 0.89% 99.11% 1.01s 90.18% line2001 testdata/file2000.src:9 (inline)

7
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.cum.lines.text.hide
generated
vendored
Normal file
@@ -0,0 +1,7 @@
Active filters:
hide=line[X3]0
Showing nodes accounting for 1.11s, 99.11% of 1.12s total
flat flat% sum% cum cum%
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src:1
0 0% 98.21% 1.01s 90.18% line2000 testdata/file2000.src:4
0.01s 0.89% 99.11% 1.01s 90.18% line2001 testdata/file2000.src:9 (inline)

7
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.cum.lines.text.show
generated
vendored
Normal file
@@ -0,0 +1,7 @@
Active filters:
show=[12]00
Showing nodes accounting for 1.11s, 99.11% of 1.12s total
flat flat% sum% cum cum%
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src:1
0 0% 98.21% 1.01s 90.18% line2000 testdata/file2000.src:4
0.01s 0.89% 99.11% 1.01s 90.18% line2001 testdata/file2000.src:9 (inline)

5
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.cum.lines.topproto.hide
generated
vendored
Normal file
@@ -0,0 +1,5 @@
Active filters:
hide=mangled[X3]0
Showing nodes accounting for 1s, 100% of 1s total
flat flat% sum% cum cum%
1s 100% 100% 1s 100% mangled1000 testdata/file1000.src:1

16
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.cum.lines.tree.show_from
generated
vendored
Normal file
@@ -0,0 +1,16 @@
Active filters:
show_from=line2
Showing nodes accounting for 1.01s, 90.18% of 1.12s total
----------------------------------------------------------+-------------
flat flat% sum% cum cum% calls calls% + context
----------------------------------------------------------+-------------
0 0% 0% 1.01s 90.18% | line2000 testdata/file2000.src:4
1.01s 100% | line2001 testdata/file2000.src:9 (inline)
----------------------------------------------------------+-------------
1.01s 100% | line2000 testdata/file2000.src:4 (inline)
0.01s 0.89% 0.89% 1.01s 90.18% | line2001 testdata/file2000.src:9
1s 99.01% | line1000 testdata/file1000.src:1
----------------------------------------------------------+-------------
1s 100% | line2001 testdata/file2000.src:9
1s 89.29% 90.18% 1s 89.29% | line1000 testdata/file1000.src:1
----------------------------------------------------------+-------------

14
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.addresses.disasm
generated
vendored
Normal file
@@ -0,0 +1,14 @@
Total: 1.12s
ROUTINE ======================== line1000
1.10s 1.10s (flat, cum) 98.21% of Total
1.10s 1.10s 1000: instruction one ;line1000 file1000.src:1
. . 1001: instruction two ;file1000.src:1
. . 1002: instruction three ;file1000.src:2
. . 1003: instruction four ;file1000.src:1
ROUTINE ======================== line3000
10ms 1.12s (flat, cum) 100% of Total
10ms 1.01s 3000: instruction one ;line3000 file3000.src:6
. 100ms 3001: instruction two ;line3000 file3000.src:9
. 10ms 3002: instruction three
. . 3003: instruction four
. . 3004: instruction five

7
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.addresses.noinlines.text
generated
vendored
Normal file
@@ -0,0 +1,7 @@
Showing nodes accounting for 1.12s, 100% of 1.12s total
Dropped 1 node (cum <= 0.06s)
flat flat% sum% cum cum%
1.10s 98.21% 98.21% 1.10s 98.21% 0000000000001000 line1000 testdata/file1000.src:1
0.01s 0.89% 99.11% 1.01s 90.18% 0000000000002000 line2000 testdata/file2000.src:4
0.01s 0.89% 100% 1.01s 90.18% 0000000000003000 line3000 testdata/file3000.src:6
0 0% 100% 0.10s 8.93% 0000000000003001 line3000 testdata/file3000.src:9

106
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.addresses.weblist
generated
vendored
Normal file
@@ -0,0 +1,106 @@

<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Pprof listing</title>
<style type="text/css">
body {
font-family: sans-serif;
}
h1 {
font-size: 1.5em;
margin-bottom: 4px;
}
.legend {
font-size: 1.25em;
}
.line, .nop, .unimportant {
color: #aaaaaa;
}
.inlinesrc {
color: #000066;
}
.deadsrc {
cursor: pointer;
}
.deadsrc:hover {
background-color: #eeeeee;
}
.livesrc {
color: #0000ff;
cursor: pointer;
}
.livesrc:hover {
background-color: #eeeeee;
}
.asm {
color: #008800;
display: none;
}
</style>
<script type="text/javascript">
function pprof_toggle_asm(e) {
var target;
if (!e) e = window.event;
if (e.target) target = e.target;
else if (e.srcElement) target = e.srcElement;

if (target) {
var asm = target.nextSibling;
if (asm && asm.className == "asm") {
asm.style.display = (asm.style.display == "block" ? "" : "block");
e.preventDefault();
return false;
}
}
}
</script>
</head>
<body>

<div class="legend">File: testbinary<br>
Type: cpu<br>
Duration: 10s, Total samples = 1.12s (11.20%)<br>Total: 1.12s</div><h2>line1000</h2><p class="filename">testdata/file1000.src</p>
<pre onClick="pprof_toggle_asm(event)">
Total: 1.10s 1.10s (flat, cum) 98.21%
<span class=line> 1</span> <span class=deadsrc> 1.10s 1.10s line1 </span><span class=asm> 1.10s 1.10s 1000: instruction one <span class=unimportant>file1000.src:1</span>
. . 1001: instruction two <span class=unimportant>file1000.src:1</span>
⋮
. . 1003: instruction four <span class=unimportant>file1000.src:1</span>
</span>
<span class=line> 2</span> <span class=deadsrc> . . line2 </span><span class=asm> . . 1002: instruction three <span class=unimportant>file1000.src:2</span>
</span>
<span class=line> 3</span> <span class=nop> . . line3 </span>
<span class=line> 4</span> <span class=nop> . . line4 </span>
<span class=line> 5</span> <span class=nop> . . line5 </span>
<span class=line> 6</span> <span class=nop> . . line6 </span>
<span class=line> 7</span> <span class=nop> . . line7 </span>
</pre>
<h2>line3000</h2><p class="filename">testdata/file3000.src</p>
<pre onClick="pprof_toggle_asm(event)">
Total: 10ms 1.12s (flat, cum) 100%
<span class=line> 1</span> <span class=nop> . . line1 </span>
<span class=line> 2</span> <span class=nop> . . line2 </span>
<span class=line> 3</span> <span class=nop> . . line3 </span>
<span class=line> 4</span> <span class=nop> . . line4 </span>
<span class=line> 5</span> <span class=nop> . . line5 </span>
<span class=line> 6</span> <span class=deadsrc> 10ms 1.01s line6 </span><span class=asm> 10ms 1.01s 3000: instruction one <span class=unimportant>file3000.src:6</span>
</span>
<span class=line> 7</span> <span class=nop> . . line7 </span>
<span class=line> 8</span> <span class=nop> . . line8 </span>
<span class=line> 9</span> <span class=deadsrc> . 110ms line9 </span><span class=asm> . 100ms 3001: instruction two <span class=unimportant>file3000.src:9</span>
. 10ms 3002: instruction three <span class=unimportant>file3000.src:9</span>
. . 3003: instruction four <span class=unimportant></span>
. . 3004: instruction five <span class=unimportant></span>
</span>
<span class=line> 10</span> <span class=nop> . . line0 </span>
<span class=line> 11</span> <span class=nop> . . line1 </span>
<span class=line> 12</span> <span class=nop> . . line2 </span>
<span class=line> 13</span> <span class=nop> . . line3 </span>
<span class=line> 14</span> <span class=nop> . . line4 </span>
</pre>

</body>
</html>

5
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.filefunctions.noinlines.text
generated
vendored
Normal file
@@ -0,0 +1,5 @@
Showing nodes accounting for 1.12s, 100% of 1.12s total
flat flat% sum% cum cum%
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src
0.01s 0.89% 99.11% 1.01s 90.18% line2000 testdata/file2000.src
0.01s 0.89% 100% 1.12s 100% line3000 testdata/file3000.src

21
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.functions.call_tree.dot
generated
vendored
Normal file
@@ -0,0 +1,21 @@
digraph "testbinary" {
node [style=filled fillcolor="#f8f8f8"]
subgraph cluster_L { "File: testbinary" [shape=box fontsize=16 label="File: testbinary\lType: cpu\lDuration: 10s, Total samples = 1.12s (11.20%)\lShowing nodes accounting for 1.11s, 99.11% of 1.12s total\lDropped 3 nodes (cum <= 0.06s)\l" tooltip="testbinary"] }
N1 [label="line1000\n1s (89.29%)" id="node1" fontsize=24 shape=box tooltip="line1000 (1s)" color="#b20500" fillcolor="#edd6d5"]
N1_0 [label = "key1:tag1\nkey2:tag1" id="N1_0" fontsize=8 shape=box3d tooltip="1s"]
N1 -> N1_0 [label=" 1s" weight=100 tooltip="1s" labeltooltip="1s"]
N2 [label="line3000\n0 of 1.12s (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (1.12s)" color="#b20000" fillcolor="#edd5d5"]
N3 [label="line3001\n0 of 1.11s (99.11%)" id="node3" fontsize=8 shape=box tooltip="line3001 (1.11s)" color="#b20000" fillcolor="#edd5d5"]
N4 [label="line1000\n0.10s (8.93%)" id="node4" fontsize=14 shape=box tooltip="line1000 (0.10s)" color="#b28b62" fillcolor="#ede8e2"]
N4_0 [label = "key1:tag2\nkey3:tag2" id="N4_0" fontsize=8 shape=box3d tooltip="0.10s"]
N4 -> N4_0 [label=" 0.10s" weight=100 tooltip="0.10s" labeltooltip="0.10s"]
N5 [label="line3002\n0.01s (0.89%)\nof 1.01s (90.18%)" id="node5" fontsize=10 shape=box tooltip="line3002 (1.01s)" color="#b20500" fillcolor="#edd6d5"]
N6 [label="line2000\n0 of 1s (89.29%)" id="node6" fontsize=8 shape=box tooltip="line2000 (1s)" color="#b20500" fillcolor="#edd6d5"]
N7 [label="line2001\n0 of 1s (89.29%)" id="node7" fontsize=8 shape=box tooltip="line2001 (1s)" color="#b20500" fillcolor="#edd6d5"]
N2 -> N3 [label=" 1.11s\n (inline)" weight=100 penwidth=5 color="#b20000" tooltip="line3000 -> line3001 (1.11s)" labeltooltip="line3000 -> line3001 (1.11s)"]
N3 -> N5 [label=" 1.01s\n (inline)" weight=91 penwidth=5 color="#b20500" tooltip="line3001 -> line3002 (1.01s)" labeltooltip="line3001 -> line3002 (1.01s)"]
N6 -> N7 [label=" 1s\n (inline)" weight=90 penwidth=5 color="#b20500" tooltip="line2000 -> line2001 (1s)" labeltooltip="line2000 -> line2001 (1s)"]
N7 -> N1 [label=" 1s" weight=90 penwidth=5 color="#b20500" tooltip="line2001 -> line1000 (1s)" labeltooltip="line2001 -> line1000 (1s)"]
N5 -> N6 [label=" 1s" weight=90 penwidth=5 color="#b20500" tooltip="line3002 -> line2000 (1s)" labeltooltip="line3002 -> line2000 (1s)"]
N3 -> N4 [label=" 0.10s" weight=9 color="#b28b62" tooltip="line3001 -> line1000 (0.10s)" labeltooltip="line3001 -> line1000 (0.10s)"]
}

20
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.functions.dot
generated
vendored
Normal file
@@ -0,0 +1,20 @@
digraph "testbinary" {
node [style=filled fillcolor="#f8f8f8"]
subgraph cluster_L { "File: testbinary" [shape=box fontsize=16 label="File: testbinary\lType: cpu\lDuration: 10s, Total samples = 1.12s (11.20%)\lShowing nodes accounting for 1.12s, 100% of 1.12s total\l" tooltip="testbinary"] }
N1 [label="line1000\n1.10s (98.21%)" id="node1" fontsize=24 shape=box tooltip="line1000 (1.10s)" color="#b20000" fillcolor="#edd5d5"]
N1_0 [label = "key1:tag1\nkey2:tag1" id="N1_0" fontsize=8 shape=box3d tooltip="1s"]
N1 -> N1_0 [label=" 1s" weight=100 tooltip="1s" labeltooltip="1s"]
N1_1 [label = "key1:tag2\nkey3:tag2" id="N1_1" fontsize=8 shape=box3d tooltip="0.10s"]
N1 -> N1_1 [label=" 0.10s" weight=100 tooltip="0.10s" labeltooltip="0.10s"]
N2 [label="line3000\n0 of 1.12s (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (1.12s)" color="#b20000" fillcolor="#edd5d5"]
N3 [label="line3001\n0 of 1.11s (99.11%)" id="node3" fontsize=8 shape=box tooltip="line3001 (1.11s)" color="#b20000" fillcolor="#edd5d5"]
N4 [label="line3002\n0.01s (0.89%)\nof 1.02s (91.07%)" id="node4" fontsize=10 shape=box tooltip="line3002 (1.02s)" color="#b20400" fillcolor="#edd6d5"]
N5 [label="line2001\n0.01s (0.89%)\nof 1.01s (90.18%)" id="node5" fontsize=10 shape=box tooltip="line2001 (1.01s)" color="#b20500" fillcolor="#edd6d5"]
N6 [label="line2000\n0 of 1.01s (90.18%)" id="node6" fontsize=8 shape=box tooltip="line2000 (1.01s)" color="#b20500" fillcolor="#edd6d5"]
N2 -> N3 [label=" 1.11s\n (inline)" weight=100 penwidth=5 color="#b20000" tooltip="line3000 -> line3001 (1.11s)" labeltooltip="line3000 -> line3001 (1.11s)"]
N6 -> N5 [label=" 1.01s\n (inline)" weight=91 penwidth=5 color="#b20500" tooltip="line2000 -> line2001 (1.01s)" labeltooltip="line2000 -> line2001 (1.01s)"]
N3 -> N4 [label=" 1.01s\n (inline)" weight=91 penwidth=5 color="#b20500" tooltip="line3001 -> line3002 (1.01s)" labeltooltip="line3001 -> line3002 (1.01s)"]
N4 -> N6 [label=" 1.01s" weight=91 penwidth=5 color="#b20500" tooltip="line3002 -> line2000 (1.01s)" labeltooltip="line3002 -> line2000 (1.01s)"]
N5 -> N1 [label=" 1s" weight=90 penwidth=5 color="#b20500" tooltip="line2001 -> line1000 (1s)" labeltooltip="line2001 -> line1000 (1s)"]
N3 -> N1 [label=" 0.10s" weight=9 color="#b28b62" tooltip="line3001 -> line1000 (0.10s)" labeltooltip="line3001 -> line1000 (0.10s)"]
}

5
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.functions.noinlines.text
generated
vendored
Normal file
@@ -0,0 +1,5 @@
Showing nodes accounting for 1.12s, 100% of 1.12s total
flat flat% sum% cum cum%
1.10s 98.21% 98.21% 1.10s 98.21% line1000
0.01s 0.89% 99.11% 1.01s 90.18% line2000
0.01s 0.89% 100% 1.12s 100% line3000

8
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.functions.text
generated
vendored
Normal file
@@ -0,0 +1,8 @@
Showing nodes accounting for 1.12s, 100% of 1.12s total
flat flat% sum% cum cum%
1.10s 98.21% 98.21% 1.10s 98.21% line1000
0.01s 0.89% 99.11% 1.01s 90.18% line2001 (inline)
0.01s 0.89% 100% 1.02s 91.07% line3002 (inline)
0 0% 100% 1.01s 90.18% line2000
0 0% 100% 1.12s 100% line3000
0 0% 100% 1.11s 99.11% line3001 (inline)

3
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.lines.topproto
generated
vendored
Normal file
@@ -0,0 +1,3 @@
Showing nodes accounting for 1s, 100% of 1s total
flat flat% sum% cum cum%
1s 100% 100% 1s 100% mangled1000 testdata/file1000.src:1

13
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.peek
generated
vendored
Normal file
@@ -0,0 +1,13 @@
Showing nodes accounting for 1.12s, 100% of 1.12s total
----------------------------------------------------------+-------------
flat flat% sum% cum cum% calls calls% + context
----------------------------------------------------------+-------------
1.01s 100% | line2000 (inline)
0.01s 0.89% 0.89% 1.01s 90.18% | line2001
1s 99.01% | line1000
----------------------------------------------------------+-------------
1.11s 100% | line3000 (inline)
0 0% 0.89% 1.11s 99.11% | line3001
1.01s 90.99% | line3002 (inline)
0.10s 9.01% | line1000
----------------------------------------------------------+-------------

13
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.tags
generated
vendored
Normal file
@@ -0,0 +1,13 @@
key1: Total 1.1s
1.0s (89.29%): tag1
100.0ms ( 8.93%): tag2
10.0ms ( 0.89%): tag3
10.0ms ( 0.89%): tag4

key2: Total 1.0s
1.0s (99.02%): tag1
10.0ms ( 0.98%): tag2

key3: Total 100.0ms
100.0ms ( 100%): tag2

6
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.tags.focus.ignore
generated
vendored
Normal file
@@ -0,0 +1,6 @@
key1: Total 100.0ms
100.0ms ( 100%): tag2

key3: Total 100.0ms
100.0ms ( 100%): tag2

32
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.traces
generated
vendored
Normal file
@@ -0,0 +1,32 @@
File: testbinary
Type: cpu
Duration: 10s, Total samples = 1.12s (11.20%)
-----------+-------------------------------------------------------
key1: tag1
key2: tag1
1s line1000
line2001
line2000
line3002
line3001
line3000
-----------+-------------------------------------------------------
key1: tag2
key3: tag2
100ms line1000
line3001
line3000
-----------+-------------------------------------------------------
key1: tag3
key2: tag2
10ms line2001
line2000
line3002
line3000
-----------+-------------------------------------------------------
key1: tag4
key2: tag1
10ms line3002
line3001
line3000
-----------+-------------------------------------------------------

17
vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpusmall.flat.addresses.tree
generated
vendored
Normal file
@@ -0,0 +1,17 @@
Showing nodes accounting for 4s, 100% of 4s total
Showing top 4 nodes out of 5
----------------------------------------------------------+-------------
flat flat% sum% cum cum% calls calls% + context
----------------------------------------------------------+-------------
1s 100% | 0000000000003000 [testbinary]
1s 25.00% 25.00% 1s 25.00% | 0000000000001000 [testbinary]
----------------------------------------------------------+-------------
1s 25.00% 50.00% 2s 50.00% | 0000000000003000 [testbinary]
1s 50.00% | 0000000000001000 [testbinary]
----------------------------------------------------------+-------------
1s 100% | 0000000000005000 [testbinary]
1s 25.00% 75.00% 1s 25.00% | 0000000000004000 [testbinary]
----------------------------------------------------------+-------------
1s 25.00% 100% 2s 50.00% | 0000000000005000 [testbinary]
1s 50.00% | 0000000000004000 [testbinary]
----------------------------------------------------------+-------------

88
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.callgrind
generated
vendored
Normal file
@@ -0,0 +1,88 @@
positions: instr line
events: inuse_space(MB)

ob=
fl=(1) testdata/file2000.src
fn=(1) line2001
0x2000 2 62
cfl=(2) testdata/file1000.src
cfn=(2) line1000
calls=0 0x1000 1
* * 0

ob=
fl=(3) testdata/file3000.src
fn=(3) line3002
+4096 3 31
cfl=(1)
cfn=(4) line2000
calls=0 * 3
* * 0

ob=
fl=(2)
fn=(2)
-8192 1 4

ob=
fl=(1)
fn=(4)
+4096 3 0
cfl=(1)
cfn=(1)
calls=0 +4096 2
* * 63

ob=
fl=(3)
fn=(5) line3000
+4096 4 0
cfl=(3)
cfn=(6) line3001
calls=0 +4096 2
* * 32

ob=
fl=(3)
fn=(6)
* 2 0
cfl=(3)
cfn=(3)
calls=0 * 3
* * 32

ob=
fl=(3)
fn=(5)
+1 4 0
cfl=(3)
cfn=(6)
calls=0 +1 2
* * 3

ob=
fl=(3)
fn=(6)
* 2 0
cfl=(2)
cfn=(2)
calls=0 -8193 1
* * 3

ob=
fl=(3)
fn=(5)
+1 4 0
cfl=(3)
cfn=(3)
calls=0 +1 3
* * 62

ob=
fl=(3)
fn=(3)
* 3 0
cfl=(1)
cfn=(4)
calls=0 -4098 3
* * 62

2
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.comments
generated
vendored
Normal file
@@ -0,0 +1,2 @@
comment
#hidden comment

21
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.cum.lines.tree.focus
generated
vendored
Normal file
@@ -0,0 +1,21 @@
Active filters:
focus=[24]00
Showing nodes accounting for 62.50MB, 63.37% of 98.63MB total
Dropped 2 nodes (cum <= 4.93MB)
----------------------------------------------------------+-------------
flat flat% sum% cum cum% calls calls% + context
----------------------------------------------------------+-------------
63.48MB 100% | line3002 testdata/file3000.src:3
0 0% 0% 63.48MB 64.36% | line2000 testdata/file2000.src:3
63.48MB 100% | line2001 testdata/file2000.src:2 (inline)
----------------------------------------------------------+-------------
63.48MB 100% | line2000 testdata/file2000.src:3 (inline)
62.50MB 63.37% 63.37% 63.48MB 64.36% | line2001 testdata/file2000.src:2
----------------------------------------------------------+-------------
0 0% 63.37% 63.48MB 64.36% | line3000 testdata/file3000.src:4
63.48MB 100% | line3002 testdata/file3000.src:3 (inline)
----------------------------------------------------------+-------------
63.48MB 100% | line3000 testdata/file3000.src:4 (inline)
0 0% 63.37% 63.48MB 64.36% | line3002 testdata/file3000.src:3
63.48MB 100% | line2000 testdata/file2000.src:3
----------------------------------------------------------+-------------

21
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.cum.relative_percentages.tree.focus
generated
vendored
Normal file
@@ -0,0 +1,21 @@
Active filters:
focus=[24]00
Showing nodes accounting for 62.50MB, 98.46% of 63.48MB total
Dropped 2 nodes (cum <= 3.17MB)
----------------------------------------------------------+-------------
flat flat% sum% cum cum% calls calls% + context
----------------------------------------------------------+-------------
63.48MB 100% | line3002
0 0% 0% 63.48MB 100% | line2000
63.48MB 100% | line2001 (inline)
----------------------------------------------------------+-------------
63.48MB 100% | line2000 (inline)
62.50MB 98.46% 98.46% 63.48MB 100% | line2001
----------------------------------------------------------+-------------
0 0% 98.46% 63.48MB 100% | line3000
63.48MB 100% | line3002 (inline)
----------------------------------------------------------+-------------
63.48MB 100% | line3000 (inline)
0 0% 98.46% 63.48MB 100% | line3002
63.48MB 100% | line2000
----------------------------------------------------------+-------------

2
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.flat.files.seconds.text
generated
vendored
Normal file
@@ -0,0 +1,2 @@
Showing nodes accounting for 0, 0% of 0 total
flat flat% sum% cum cum%

5
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.flat.files.text
generated
vendored
Normal file
@@ -0,0 +1,5 @@
Showing nodes accounting for 93.75MB, 95.05% of 98.63MB total
Dropped 1 node (cum <= 4.93MB)
flat flat% sum% cum cum%
62.50MB 63.37% 63.37% 63.48MB 64.36% testdata/file2000.src
31.25MB 31.68% 95.05% 98.63MB 100% testdata/file3000.src

8
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.flat.files.text.focus
generated
vendored
Normal file
@@ -0,0 +1,8 @@
Active filters:
focus=[12]00
taghide=[X3]00
Showing nodes accounting for 67.38MB, 68.32% of 98.63MB total
flat flat% sum% cum cum%
62.50MB 63.37% 63.37% 63.48MB 64.36% testdata/file2000.src
4.88MB 4.95% 68.32% 4.88MB 4.95% testdata/file1000.src
0 0% 68.32% 67.38MB 68.32% testdata/file3000.src

8
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.flat.inuse_objects.text
generated
vendored
Normal file
@@ -0,0 +1,8 @@
Showing nodes accounting for 150, 100% of 150 total
flat flat% sum% cum cum%
80 53.33% 53.33% 130 86.67% line3002 (inline)
40 26.67% 80.00% 50 33.33% line2001 (inline)
30 20.00% 100% 30 20.00% line1000
0 0% 100% 50 33.33% line2000
0 0% 100% 150 100% line3000
0 0% 100% 110 73.33% line3001 (inline)

13
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.flat.inuse_space.dot.focus
generated
vendored
Normal file
@@ -0,0 +1,13 @@
digraph "unnamed" {
node [style=filled fillcolor="#f8f8f8"]
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lActive filters:\l tagfocus=1mb:2gb\lShowing nodes accounting for 62.50MB, 63.37% of 98.63MB total\l"] }
N1 [label="line2001\n62.50MB (63.37%)" id="node1" fontsize=24 shape=box tooltip="line2001 (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
NN1_0 [label = "1.56MB" id="NN1_0" fontsize=8 shape=box3d tooltip="62.50MB"]
N1 -> NN1_0 [label=" 62.50MB" weight=100 tooltip="62.50MB" labeltooltip="62.50MB"]
N2 [label="line3000\n0 of 62.50MB (63.37%)" id="node2" fontsize=8 shape=box tooltip="line3000 (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
N3 [label="line2000\n0 of 62.50MB (63.37%)" id="node3" fontsize=8 shape=box tooltip="line2000 (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
N4 [label="line3002\n0 of 62.50MB (63.37%)" id="node4" fontsize=8 shape=box tooltip="line3002 (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
N3 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line2000 -> line2001 (62.50MB)" labeltooltip="line2000 -> line2001 (62.50MB)"]
N2 -> N4 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 -> line3002 (62.50MB)" labeltooltip="line3000 -> line3002 (62.50MB)"]
N4 -> N3 [label=" 62.50MB" weight=64 penwidth=4 color="#b21600" tooltip="line3002 -> line2000 (62.50MB)" labeltooltip="line3002 -> line2000 (62.50MB)"]
}

16
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.flat.inuse_space.dot.focus.ignore
generated
vendored
Normal file
@@ -0,0 +1,16 @@
digraph "unnamed" {
node [style=filled fillcolor="#f8f8f8"]
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lActive filters:\l tagfocus=30kb:\l tagignore=1mb:2mb\lShowing nodes accounting for 36.13MB, 36.63% of 98.63MB total\lDropped 2 nodes (cum <= 4.93MB)\l"] }
N1 [label="line3002\n31.25MB (31.68%)\nof 32.23MB (32.67%)" id="node1" fontsize=24 shape=box tooltip="line3002 (32.23MB)" color="#b23200" fillcolor="#eddcd5"]
NN1_0 [label = "400kB" id="NN1_0" fontsize=8 shape=box3d tooltip="31.25MB"]
N1 -> NN1_0 [label=" 31.25MB" weight=100 tooltip="31.25MB" labeltooltip="31.25MB"]
N2 [label="line3000\n0 of 36.13MB (36.63%)" id="node2" fontsize=8 shape=box tooltip="line3000 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
N3 [label="line3001\n0 of 36.13MB (36.63%)" id="node3" fontsize=8 shape=box tooltip="line3001 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
N4 [label="line1000\n4.88MB (4.95%)" id="node4" fontsize=15 shape=box tooltip="line1000 (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
NN4_0 [label = "200kB" id="NN4_0" fontsize=8 shape=box3d tooltip="3.91MB"]
N4 -> NN4_0 [label=" 3.91MB" weight=100 tooltip="3.91MB" labeltooltip="3.91MB"]
N2 -> N3 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 -> line3001 (36.13MB)" labeltooltip="line3000 -> line3001 (36.13MB)"]
N3 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 -> line3002 (32.23MB)" labeltooltip="line3001 -> line3002 (32.23MB)"]
N3 -> N4 [label=" 3.91MB" weight=4 color="#b2a58f" tooltip="line3001 -> line1000 (3.91MB)" labeltooltip="line3001 -> line1000 (3.91MB)"]
N1 -> N4 [label=" 0.98MB" color="#b2b0a9" tooltip="line3002 ... line1000 (0.98MB)" labeltooltip="line3002 ... line1000 (0.98MB)" style="dotted" minlen=2]
}

21
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.flat.lines.dot.focus
generated
vendored
Normal file
@@ -0,0 +1,21 @@
digraph "unnamed" {
node [style=filled fillcolor="#f8f8f8"]
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lActive filters:\l focus=[12]00\lShowing nodes accounting for 67.38MB, 68.32% of 98.63MB total\l"] }
N1 [label="line3000\nfile3000.src:4\n0 of 67.38MB (68.32%)" id="node1" fontsize=8 shape=box tooltip="line3000 testdata/file3000.src:4 (67.38MB)" color="#b21300" fillcolor="#edd7d5"]
N2 [label="line2001\nfile2000.src:2\n62.50MB (63.37%)\nof 63.48MB (64.36%)" id="node2" fontsize=24 shape=box tooltip="line2001 testdata/file2000.src:2 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
NN2_0 [label = "1.56MB" id="NN2_0" fontsize=8 shape=box3d tooltip="62.50MB"]
N2 -> NN2_0 [label=" 62.50MB" weight=100 tooltip="62.50MB" labeltooltip="62.50MB"]
N3 [label="line1000\nfile1000.src:1\n4.88MB (4.95%)" id="node3" fontsize=13 shape=box tooltip="line1000 testdata/file1000.src:1 (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
NN3_0 [label = "200kB" id="NN3_0" fontsize=8 shape=box3d tooltip="3.91MB"]
N3 -> NN3_0 [label=" 3.91MB" weight=100 tooltip="3.91MB" labeltooltip="3.91MB"]
N4 [label="line3002\nfile3000.src:3\n0 of 63.48MB (64.36%)" id="node4" fontsize=8 shape=box tooltip="line3002 testdata/file3000.src:3 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
N5 [label="line3001\nfile3000.src:2\n0 of 4.88MB (4.95%)" id="node5" fontsize=8 shape=box tooltip="line3001 testdata/file3000.src:2 (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
N6 [label="line2000\nfile2000.src:3\n0 of 63.48MB (64.36%)" id="node6" fontsize=8 shape=box tooltip="line2000 testdata/file2000.src:3 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
N6 -> N2 [label=" 63.48MB\n (inline)" weight=65 penwidth=4 color="#b21600" tooltip="line2000 testdata/file2000.src:3 -> line2001 testdata/file2000.src:2 (63.48MB)" labeltooltip="line2000 testdata/file2000.src:3 -> line2001 testdata/file2000.src:2 (63.48MB)"]
N4 -> N6 [label=" 63.48MB" weight=65 penwidth=4 color="#b21600" tooltip="line3002 testdata/file3000.src:3 -> line2000 testdata/file2000.src:3 (63.48MB)" labeltooltip="line3002 testdata/file3000.src:3 -> line2000 testdata/file2000.src:3 (63.48MB)"]
N1 -> N4 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 testdata/file3000.src:4 -> line3002 testdata/file3000.src:3 (62.50MB)" labeltooltip="line3000 testdata/file3000.src:4 -> line3002 testdata/file3000.src:3 (62.50MB)"]
N1 -> N5 [label=" 4.88MB\n (inline)" weight=5 color="#b2a086" tooltip="line3000 testdata/file3000.src:4 -> line3001 testdata/file3000.src:2 (4.88MB)" labeltooltip="line3000 testdata/file3000.src:4 -> line3001 testdata/file3000.src:2 (4.88MB)"]
N5 -> N3 [label=" 3.91MB" weight=4 color="#b2a58f" tooltip="line3001 testdata/file3000.src:2 -> line1000 testdata/file1000.src:1 (3.91MB)" labeltooltip="line3001 testdata/file3000.src:2 -> line1000 testdata/file1000.src:1 (3.91MB)"]
N2 -> N3 [label=" 0.98MB" color="#b2b0a9" tooltip="line2001 testdata/file2000.src:2 -> line1000 testdata/file1000.src:1 (0.98MB)" labeltooltip="line2001 testdata/file2000.src:2 -> line1000 testdata/file1000.src:1 (0.98MB)" minlen=2]
N5 -> N4 [label=" 0.98MB\n (inline)" color="#b2b0a9" tooltip="line3001 testdata/file3000.src:2 -> line3002 testdata/file3000.src:3 (0.98MB)" labeltooltip="line3001 testdata/file3000.src:2 -> line3002 testdata/file3000.src:3 (0.98MB)"]
}

6
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.tags
generated
vendored
Normal file
@@ -0,0 +1,6 @@
bytes: Total 98.6MB
62.5MB (63.37%): 1.56MB
31.2MB (31.68%): 400kB
3.9MB ( 3.96%): 200kB
1000.0kB ( 0.99%): 100kB

6
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.tags.unit
generated
vendored
Normal file
@@ -0,0 +1,6 @@
bytes: Total 103424000.0B
65536000.0B (63.37%): 1638400B
32768000.0B (31.68%): 409600B
4096000.0B ( 3.96%): 204800B
1024000.0B ( 0.99%): 102400B

8
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap_alloc.flat.alloc_objects.text
generated
vendored
Normal file
@@ -0,0 +1,8 @@
Showing nodes accounting for 150, 100% of 150 total
flat flat% sum% cum cum%
80 53.33% 53.33% 130 86.67% line3002 (inline)
40 26.67% 80.00% 50 33.33% line2001 (inline)
30 20.00% 100% 30 20.00% line1000
0 0% 100% 50 33.33% line2000
0 0% 100% 150 100% line3000
0 0% 100% 110 73.33% line3001 (inline)

14
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap_alloc.flat.alloc_space.dot
generated
vendored
Normal file
@@ -0,0 +1,14 @@
digraph "unnamed" {
node [style=filled fillcolor="#f8f8f8"]
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: alloc_space\lActive filters:\l tagshow=[2]00\lShowing nodes accounting for 93.75MB, 95.05% of 98.63MB total\lDropped 1 node (cum <= 4.93MB)\l"] }
N1 [label="line3002\n31.25MB (31.68%)\nof 94.73MB (96.04%)" id="node1" fontsize=20 shape=box tooltip="line3002 (94.73MB)" color="#b20200" fillcolor="#edd5d5"]
N2 [label="line3000\n0 of 98.63MB (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (98.63MB)" color="#b20000" fillcolor="#edd5d5"]
N3 [label="line2001\n62.50MB (63.37%)\nof 63.48MB (64.36%)" id="node3" fontsize=24 shape=box tooltip="line2001 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
N4 [label="line2000\n0 of 63.48MB (64.36%)" id="node4" fontsize=8 shape=box tooltip="line2000 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
N5 [label="line3001\n0 of 36.13MB (36.63%)" id="node5" fontsize=8 shape=box tooltip="line3001 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
N4 -> N3 [label=" 63.48MB\n (inline)" weight=65 penwidth=4 color="#b21600" tooltip="line2000 -> line2001 (63.48MB)" labeltooltip="line2000 -> line2001 (63.48MB)"]
N1 -> N4 [label=" 63.48MB" weight=65 penwidth=4 color="#b21600" tooltip="line3002 -> line2000 (63.48MB)" labeltooltip="line3002 -> line2000 (63.48MB)"]
N2 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 -> line3002 (62.50MB)" labeltooltip="line3000 -> line3002 (62.50MB)"]
N2 -> N5 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 -> line3001 (36.13MB)" labeltooltip="line3000 -> line3001 (36.13MB)"]
N5 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 -> line3002 (32.23MB)" labeltooltip="line3001 -> line3002 (32.23MB)"]
}

18
vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap_alloc.flat.alloc_space.dot.focus
generated
vendored
Normal file
@@ -0,0 +1,18 @@
digraph "unnamed" {
node [style=filled fillcolor="#f8f8f8"]
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: alloc_space\lActive filters:\l focus=[234]00\lShowing nodes accounting for 93.75MB, 95.05% of 98.63MB total\lDropped 1 node (cum <= 4.93MB)\l"] }
N1 [label="line3002\n31.25MB (31.68%)\nof 94.73MB (96.04%)" id="node1" fontsize=20 shape=box tooltip="line3002 (94.73MB)" color="#b20200" fillcolor="#edd5d5"]
NN1_0 [label = "400kB" id="NN1_0" fontsize=8 shape=box3d tooltip="31.25MB"]
N1 -> NN1_0 [label=" 31.25MB" weight=100 tooltip="31.25MB" labeltooltip="31.25MB"]
N2 [label="line3000\n0 of 98.63MB (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (98.63MB)" color="#b20000" fillcolor="#edd5d5"]
N3 [label="line2001\n62.50MB (63.37%)\nof 63.48MB (64.36%)" id="node3" fontsize=24 shape=box tooltip="line2001 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
NN3_0 [label = "1.56MB" id="NN3_0" fontsize=8 shape=box3d tooltip="62.50MB"]
N3 -> NN3_0 [label=" 62.50MB" weight=100 tooltip="62.50MB" labeltooltip="62.50MB"]
N4 [label="line2000\n0 of 63.48MB (64.36%)" id="node4" fontsize=8 shape=box tooltip="line2000 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
N5 [label="line3001\n0 of 36.13MB (36.63%)" id="node5" fontsize=8 shape=box tooltip="line3001 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
N4 -> N3 [label=" 63.48MB\n (inline)" weight=65 penwidth=4 color="#b21600" tooltip="line2000 -> line2001 (63.48MB)" labeltooltip="line2000 -> line2001 (63.48MB)"]
N1 -> N4 [label=" 63.48MB" weight=65 penwidth=4 color="#b21600" tooltip="line3002 -> line2000 (63.48MB)" labeltooltip="line3002 -> line2000 (63.48MB)" minlen=2]
N2 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 -> line3002 (62.50MB)" labeltooltip="line3000 -> line3002 (62.50MB)"]
N2 -> N5 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 -> line3001 (36.13MB)" labeltooltip="line3000 -> line3001 (36.13MB)"]
N5 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 -> line3002 (32.23MB)" labeltooltip="line3001 -> line3002 (32.23MB)"]
}