Compare commits

...

135 Commits

Author SHA1 Message Date
Dan Schaper
326cd6a1f8 Merge pull request #4665 from pi-hole/fix/touch_guard
Wrap touch calls with if/then guards for Buster docker.
2022-04-01 15:25:41 -07:00
Dan Schaper
b714c4598a Found it.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2022-04-01 14:49:30 -07:00
Dan Schaper
0f192998eb Create empty files.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2022-04-01 14:17:57 -07:00
Dan Schaper
8a5c7dec71 Ensure existing files are proper owner and mode.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
co-authored-by: RD WebDesign <github@rdwebdesign.com.br>
2022-04-01 14:08:09 -07:00
Dan Schaper
d45c9fc522 Final touch to install fix.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2022-04-01 11:08:26 -07:00
Dan Schaper
c2384ecc6f Change touch that would always fire to install.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2022-03-31 14:23:39 -07:00
Dan Schaper
2f38452565 Wrap touch calls with if/then guards for Buster docker.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2022-03-31 12:03:17 -07:00
Adam Warner
899cac0aac Ignore Documentation Needed label 2022-03-05 15:49:54 +00:00
Adam Warner
9be5199f7c remove the CONTENT_COMPARISON setting (defaults to false) 2022-02-20 12:39:58 +00:00
Adam Warner
6ffa2ba1b2 Merge pull request #4547 from pi-hole/development
Pi-hole Core v5.9
2022-02-12 20:04:20 +00:00
Adam Warner
e9250d62c5 Merge pull request #4598 from pi-hole/alt-4597
Use case insensitive deletion when removing custom CNAME/DNS records
2022-02-04 21:26:33 +00:00
Adam Warner
08999bf315 Use case-insensitive deletion when removing custom CNAME/DNS records, in case manual entries with mixed case have been added
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2022-02-04 21:16:02 +00:00
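For context, removing an entry regardless of the case it was added with can be done with GNU sed's case-insensitive address flag. A minimal sketch, assuming the usual custom.list layout; this is illustrative, not the actual removal code from the PR:

```bash
#!/usr/bin/env bash
# Illustrative only: delete a custom DNS record no matter how its case was
# typed, using GNU sed's case-insensitive address modifier "I".
ip="192.168.1.10"
host="MyHost.lan"

# custom.list holds one "IP domain" pair per line; the trailing "I" makes the
# address match case-insensitively (dots are plain regex dots here, which is
# fine for an illustration).
sudo sed -i "/^${ip} ${host}\$/Id" /etc/pihole/custom.list
```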
Adam Warner
2bd670a3dd Merge pull request #4582 from lschloetterer/patch-1
add parameter to set filename for teleporter
2022-02-04 20:44:49 +00:00
Adam Warner
f342b2c9f6 Merge pull request #4489 from pi-hole/tweak/manpages
Remove pihole-FTL.conf manpage
2022-02-04 20:36:32 +00:00
Lukas Schlötterer
2a0bb5b9ee Create second entry for teleporter and adjust spacing
Signed-off-by: Lukas Schlötterer <80917404+lschloetterer@users.noreply.github.com>
2022-02-04 21:29:23 +01:00
yubiuser
c3c5342b48 Fix reviewer's comment
Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>
2022-02-04 21:11:54 +01:00
Christian König
d7d8e9730b Remove pihole-FTL.conf.5 from automated tests
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-02-04 20:43:47 +01:00
Christian König
7c60ee8df1 Remove pihole-FTL.conf man page
Signed-off-by: Christian König <ckoenig@posteo.de>

Remove double https://

Signed-off-by: Christian König <ckoenig@posteo.de>
2022-02-04 20:43:47 +01:00
Adam Warner
ee9f4856a2 Merge pull request #4596 from pi-hole/long-live-centos8-stream
Switch from centos8 to centos8:stream base image for centos 8 tests
2022-02-03 19:05:16 +00:00
Adam Warner
444526ad58 Switch from centos8 to centos8:stream base image for centos 8 tests 2022-02-03 18:43:19 +00:00
DL6ER
844c4dcdc8 Merge pull request #4584 from pi-hole/fix/gravity_internal_sqlite3
Replace calls to sqlite3 by calls to pihole-FTL sqlite3
2022-02-03 05:45:04 +01:00
Lukas Schlötterer
881d92632c add hint for custom teleporter filename to help function
Signed-off-by: Lukas Schlötterer <80917404+lschloetterer@users.noreply.github.com>
2022-02-01 09:41:57 +01:00
DL6ER
76d4e1209f Merge pull request #4585 from pi-hole/tweak/sed-add-if-not-exists
Replace value for BLOCKING_ENABLED, add if it does not already exist
2022-02-01 07:45:34 +01:00
DL6ER
d956498c8c Merge pull request #4575 from pi-hole/fix/tag_update
Fix updating based on tags on older git versions by doing a full fetch
2022-02-01 07:44:23 +01:00
DL6ER
e09dd56807 Remove RPM package sqlite as well
Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-02-01 07:38:57 +01:00
DL6ER
30ec1c94cc Merge pull request #4593 from pi-hole/master
sync: master to development
2022-02-01 07:37:47 +01:00
Adam Warner
5d68dac90e Merge pull request #4588 from pi-hole/stale
Fix stale label to stale
2022-01-31 19:25:28 +00:00
Adam Warner
77e5121d43 Split the new function out into a separate utility script and add a test for it. Can be used in future to organise reused/commonly-used code
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2022-01-30 23:05:28 +00:00
DL6ER
74d7d10554 Orphans need to be deleted in the old database
Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-01-30 21:09:24 +01:00
Christian König
2f4c4d9176 Fix stale label to stale
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-01-30 20:13:10 +01:00
Adam Warner
1dd9d55d82 Replace the value for BLOCKING_ENABLED (and QUERY_LOGGING, for consistency) and, if the value we are trying to replace does not exist, add it to the end of the file.
Co-authored-by: MichaIng <micha@dietpi.com>
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2022-01-30 15:53:03 +00:00
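The replace-or-append behaviour described in this commit can be sketched as follows; setupVars.conf and the key names come from the commit itself, while the helper function and exact sed call are illustrative rather than the installer's actual code:

```bash
#!/usr/bin/env bash
SETUPVARS="/etc/pihole/setupVars.conf"

set_or_append() {
    local key="$1" value="$2"
    if grep -q "^${key}=" "${SETUPVARS}"; then
        # Key present: replace its value in place, anchored to the line start
        sed -i "s/^${key}=.*/${key}=${value}/" "${SETUPVARS}"
    else
        # Key missing: append it to the end of the file
        echo "${key}=${value}" >> "${SETUPVARS}"
    fi
}

set_or_append "BLOCKING_ENABLED" "true"
set_or_append "QUERY_LOGGING" "true"
```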
DL6ER
8cbffa179d Replace remaining sqlite3 calls by calls to our embedded pihole-FTL sqlite3 engine and remove sqlite3 as dependency in the installer.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-01-30 11:18:17 +01:00
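The pattern applied by this commit (and visible in the diffs further down) is to call the SQLite engine embedded in pihole-FTL instead of a separately installed sqlite3 binary; the query below is just an example:

```bash
# Before: required the external sqlite3 package
sqlite3 /etc/pihole/gravity.db "SELECT value FROM info WHERE property = 'version';"

# After: uses the engine embedded in pihole-FTL, so sqlite3 is no longer an
# installer dependency
pihole-FTL sqlite3 /etc/pihole/gravity.db "SELECT value FROM info WHERE property = 'version';"
```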
DL6ER
5bb79de70b Clean possible leftovers in domainlist_by_group, adlist_by_group, and client_by_group before copying from the old database to avoid foreign key violations.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-01-30 10:38:24 +01:00
DL6ER
534f9a63bf Copy database tables earlier into the new gravity database to avoid foreign key constraint violations when adding gravity entries referring to an empty adlist table
Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-01-30 10:36:20 +01:00
DL6ER
f0f5cc52d9 Use internal SQLite3 engine in more places in gravity.sh
Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-01-29 22:39:45 +01:00
Lukas Schlötterer
bad6d8a59e add parameter to set filename for teleporter
Make it possible to write pihole -a -t myname.tar.gz to configure the filename however you want

Signed-off-by: Lukas Schlötterer <80917404+lschloetterer@users.noreply.github.com>
2022-01-28 16:26:57 +01:00
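Usage implied by the two teleporter commits above (the archive name is an example):

```bash
# Export a Teleporter backup with a user-chosen archive name
pihole -a -t myname.tar.gz

# Without an argument, the default timestamped archive name is used
pihole -a -t
```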
Christian König
7aa28e4a3a Do a full fetch
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-01-22 22:09:15 +01:00
Adam Warner
e80a7731c9 Merge pull request #4568 from pi-hole/master
sync: master to development
2022-01-16 16:26:50 +00:00
Adam Warner
3cd662eaeb Merge pull request #4558 from pi-hole/stale
Change the exemption issue label pinned to internal for stale issues
2022-01-16 14:59:17 +00:00
RD WebDesign
6ead24b315 Move space into variable (#4562)
Signed-off-by: rdwebdesign <github@rdwebdesign.com.br>
2022-01-14 17:00:34 +01:00
Christian König
cdde832ed3 Some use uppercase some don't...
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-01-13 09:16:31 +01:00
Christian König
57ba60ce54 Change the exemption issue label pinned to internal for stale issues
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-01-13 09:13:40 +01:00
Lukas Schlötterer
ed6b85241b use sed substitute instead of delete and append (#4555)
* use sed substitute instead of delete and append

doesn't move the line to the end of the file, instead keeps the order of the lines in setupVars.conf intact

Signed-off-by: Lukas Schlötterer <80917404+lschloetterer@users.noreply.github.com>

* Match start of line

as suggested in the review

Signed-off-by: Lukas Schlötterer <80917404+lschloetterer@users.noreply.github.com>

Co-authored-by: yubiuser <ckoenig@posteo.de>

Co-authored-by: yubiuser <ckoenig@posteo.de>
2022-01-12 09:23:13 +01:00
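A sketch of the difference this PR describes; the key shown is a placeholder, not necessarily the setting touched by the actual change:

```bash
#!/usr/bin/env bash
conf="/etc/pihole/setupVars.conf"

# Old approach: delete the line and append a new one, which moves the setting
# to the end of the file
sed -i '/^PIHOLE_DNS_1=/d' "${conf}"
echo "PIHOLE_DNS_1=9.9.9.9" >> "${conf}"

# New approach: substitute in place, anchored to the start of the line as
# suggested in review, so the order of lines in setupVars.conf stays intact
sed -i 's/^PIHOLE_DNS_1=.*/PIHOLE_DNS_1=9.9.9.9/' "${conf}"
```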
Adam Warner
918f7a504c Merge pull request #4554 from pi-hole/master
sync: master to development
2022-01-11 19:20:18 +00:00
Adam Warner
3260cb40b5 ops per run -> 300 for stale
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2022-01-11 19:17:29 +00:00
Adam Warner
a79c1159a9 Merge pull request #4550 from pi-hole/master
sync: master to development
2022-01-11 09:11:51 +00:00
Adam Warner
65a04246cd Merge pull request #4548 from pi-hole/actions/sync-to-dev
[Maintenance] Sync Master back to Dev when code is pushed to master
2022-01-11 09:10:59 +00:00
Adam Warner
f1245685dc Add action to automatically sync master to dev when code is pushed to master
Add in a release.yml to ignore the github-actions author when auto-generating release notes

Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2022-01-11 08:53:35 +00:00
DL6ER
ec3a5c2989 Merge pull request #4543 from pi-hole/tweak/debug_ipaddr
Include ip addr show and ip route show in debug log
2022-01-09 12:53:37 +01:00
DL6ER
b20b38d44f Include ip addr show and ip route show for us to help with local-service issues (where hops-away is measured)
Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-01-09 12:38:22 +01:00
DL6ER
d5253f26f4 Merge pull request #4542 from pi-hole/remove_oneline
Remove oneline from ss call
2022-01-09 11:39:33 +01:00
Christian König
a65a841c56 Remove oneline from ss call
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-01-09 07:13:51 +01:00
Adam Warner
1b0b24daf5 Merge pull request #4539 from pi-hole/master
Sync Master -> Dev
2022-01-08 22:35:46 +00:00
Adam Warner
7010ed454c Merge pull request #4532 from MichaIng/patch-1
Install netcat-openbsd as dependency explicitly
2022-01-08 15:17:01 +00:00
DL6ER
ce86157067 Fix gravity in case there are no adlists at all or all are disabled (#4535)
Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-01-08 14:15:26 +01:00
Adam Warner
3097c8fbdc Skip the required ports check if installed in a docker container. Unprivileged containers do not have access to the information required to resolve the name of the listening service - and the container should not start if there was a port conflict anyway (#4536)
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2022-01-08 13:57:49 +01:00
Adam Warner
363e2f10bb Merge pull request #4534 from pi-hole/meta/enable_stale
Enable Stale Action for live use
2022-01-08 11:42:48 +00:00
Dan Schaper
bfd9fe80ef Remove debug from Stale
Put Stale into action.
2022-01-08 01:42:35 -08:00
MichaIng
c2080324b7 Install netcat-openbsd as dependency explicitly
Since Debian Stretch and Ubuntu Bionic, the "netcat" package is a transitional dummy package which pulls in "netcat-traditional" on Debian Stretch+Buster and Ubuntu Bionic, and "netcat-openbsd" on Debian Bullseye, Ubuntu Focal and up.

On Debian Bookworm (testing), however, the "netcat" package has been removed during the last 3 days at the time of writing, so that it fails to be installed. While "netcat-traditional" and "netcat-openbsd" both "Provides: netcat", since there are two alternatives, APT does not automatically pick one but aborts, and the only solution is to install one explicitly.

While this is likely a temporary state of the Debian testing suite, a closer look at the two alternatives shows that "netcat-openbsd" is a much more actively maintained, newer version with additional support for IPv6, proxies, and UNIX sockets, which is likely the reason for the gradual transition via the meta package from "netcat-traditional" to "netcat-openbsd". This commit hence follows this aim by skipping the transitional dummy package and installing "netcat-openbsd" explicitly as a dependency, to avoid any possible errors like the one which currently occurs on Bookworm.

Both packages can be installed concurrently and do not conflict, but are managed via dpkg's "update-alternatives".

For reference:
- https://packages.debian.org/netcat
- https://packages.ubuntu.com/netcat

Signed-off-by: MichaIng <micha@dietpi.com>
2022-01-07 18:55:15 +01:00
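In practice the change amounts to installing the OpenBSD variant directly; the second command merely shows how dpkg's alternatives system tracks which variant provides nc (both commands are shown for illustration):

```bash
# Install the OpenBSD netcat explicitly instead of the transitional "netcat"
# dummy package (Debian/Ubuntu)
sudo apt-get install -y netcat-openbsd

# Both variants may coexist; update-alternatives shows which one provides "nc"
update-alternatives --display nc
```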
Adam Warner
875ad04fde Merge pull request #4522 from pi-hole/development
v5.8.1
2022-01-05 23:00:01 +00:00
Adam Warner
0124e491d0 Merge pull request #4521 from pi-hole/fix/chronometer
Fix/chronometer
2022-01-05 22:51:43 +00:00
Christian König
81698ef1ed Fix Pi-hole status in chronometer
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-01-05 21:09:57 +01:00
Adam Warner
2ff10fcd0a Merge pull request #4514 from pi-hole/development
Pi-hole core v5.8
2022-01-05 18:24:21 +00:00
DL6ER
5823f5e254 Use ss instead of lsof (#4518)
* Use ss instead of lsof for pihole status checks

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Use ss FILTER instead of piping into bash

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Use ss in debug log generation

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Remove lsof from dependencies

Signed-off-by: DL6ER <dl6er@dl6er.de>
2022-01-05 16:41:46 +00:00
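Examples of the kind of ss invocations the scripts moved to; the exact filters used by Pi-hole differ, these lines only illustrate the FILTER syntax mentioned in the commit messages:

```bash
# Listening sockets on the DNS port, TCP and UDP, numeric, with owning process
ss -tulnp 'sport = :53'

# All listening TCP sockets (rough equivalent of the old lsof-based check)
ss -tlnp
```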
Adam Warner
7807a93e10 If PIHOLE_DOCKER_TAG is set, then include that info in the debug run (#4515)
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2022-01-04 21:46:06 +00:00
yubiuser
c6a2a6f739 Install pihole-FTL.conf template on fresh installation (#4496)
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-01-04 19:09:30 +00:00
yubiuser
241e53ed45 Skip debug upload question if called from web interface (#4494)
* Skip debug upload question if called from web interface

Signed-off-by: Christian König <ckoenig@posteo.de>

* Suppress upload error if users opt-out from uploading from web interface

Signed-off-by: Christian König <ckoenig@posteo.de>

* Fix and reverse logic

Signed-off-by: Christian König <ckoenig@posteo.de>

* Remove additional space

Signed-off-by: Christian König <ckoenig@posteo.de>

* Include reviewer's comment :D

Co-authored-by: Adam Warner <me@adamwarner.co.uk>

Co-authored-by: Adam Warner <me@adamwarner.co.uk>
2022-01-04 19:06:41 +00:00
Adam Warner
d605b4b8f9 Merge pull request #4513 from pi-hole/master
master->development
2022-01-04 16:57:33 +00:00
yubiuser
0e359a6321 Set dnsmasq interface listening by default to local (#4509)
Signed-off-by: Christian König <ckoenig@posteo.de>
2022-01-04 09:40:07 +01:00
WaLLy3K
5bd7cc9c9d Replace which with command -v (#4499)
Signed-off-by: WaLLy3K WaLLy3K@users.noreply.github.com
2022-01-01 18:02:20 +00:00
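The substitution is mechanical; dig is just an example binary:

```bash
# Instead of:  which dig
# the scripts now use the shell builtin, which is POSIX and needs no extra package
if command -v dig >/dev/null 2>&1; then
    echo "dig found at $(command -v dig)"
fi
```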
DL6ER
886f0c7df3 Merge pull request #4485 from pi-hole/tweak/web_status
Return the port FTL is listening on in pihole status function
2021-12-29 11:13:12 +01:00
Christian König
3989cc19e9 Remove double text output
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-28 19:55:42 +01:00
Christian König
bcb59159ed Analyse port also on ports other than 53
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-28 19:52:11 +01:00
Christian König
2b52f92647 Include port also in CLI output
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-28 19:36:32 +01:00
Matthew Nickson
71ed842dfd Fixed path to 404 file when using custom.php (#4488)
Signed-off-by: Computroniks <mnickson@sidingsmedia.com>
2021-12-28 19:32:06 +01:00
Christian König
f45248df80 Use FTL's new dns-port API endpoint
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-28 13:42:19 +01:00
Christian König
5729f64ddc Fix missing fi
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-28 12:21:31 +01:00
Christian König
2a869419b4 Add netcat to dependencies
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-28 12:18:39 +01:00
yubiuser
4a2f4c1bce Fix indentation_2
Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>
2021-12-28 12:11:46 +01:00
yubiuser
5ef731fc57 Fix indentation
Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>
2021-12-28 12:11:26 +01:00
Christian König
71ebd64f4e mend
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-26 18:13:14 +01:00
Christian König
9f0e0dbd37 Fix analyse ports
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-26 18:10:36 +01:00
Christian König
ef30a85afb Include port in status function
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-26 17:10:48 +01:00
Adam Warner
1b809e4e8e Merge pull request #4480 from pi-hole/development
Pi-hole Core v5.7
2021-12-22 20:24:59 +00:00
DL6ER
3d3bb45a46 Merge pull request #4288 from pi-hole/new/gravity_repair
Implement fully-automated gravity database recovery method
2021-12-22 21:08:01 +01:00
DL6ER
d2a98ae954 Document -r recover force case
Signed-off-by: DL6ER <dl6er@dl6er.de>
2021-12-22 19:53:52 +01:00
DL6ER
2e1ce7fc87 Apply suggestions from code review
Co-authored-by: yubiuser <ckoenig@posteo.de>
2021-12-22 19:52:08 +01:00
yubiuser
920cf6de14 Check for updates on master based on tags not commits (#4475)
* Check for updates on master based on tags not commits

Signed-off-by: Christian König <ckoenig@posteo.de>

* Fix stickler

Signed-off-by: Christian König <ckoenig@posteo.de>

* Address reviewer's comments

Signed-off-by: Christian König <ckoenig@posteo.de>

* Fix stickler again

Signed-off-by: Christian König <ckoenig@posteo.de>

* Use local git instead of relying on github

Signed-off-by: Christian König <ckoenig@posteo.de>

* Add --tags

Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>

Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>
2021-12-22 18:21:44 +00:00
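A hedged sketch of the tag-based check these commits describe; the repository path and exact git plumbing are illustrative, the real logic lives in the update scripts:

```bash
#!/usr/bin/env bash
# Sketch only: compare the newest local tag against the newest tag on origin.
cd /etc/.pihole || exit 1

# Older git versions can miss new tags unless a full fetch with --tags is done
git fetch --tags --force origin

current_tag="$(git describe --tags --abbrev=0)"
remote_tag="$(git describe --tags --abbrev=0 origin/master)"

if [[ "${current_tag}" != "${remote_tag}" ]]; then
    echo "Update available: ${current_tag} -> ${remote_tag}"
fi
```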
DL6ER
1eb31174a5 Merge pull request #4455 from pi-hole/comment
Add comment help text to list function
2021-12-21 22:26:05 +01:00
yubiuser
ff4487ff74 Escape quotes
Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>
2021-12-21 22:10:56 +01:00
DL6ER
54c58327f1 Merge pull request #4450 from pi-hole/unblock_NODATA
Unblock adlist domain during gravity run in NODATA mode
2021-12-21 22:08:14 +01:00
yubiuser
db5e94b14a use +short and omit obsolete awk
Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>
2021-12-21 22:01:34 +01:00
DL6ER
7167e6d5e4 Apply suggestions from code review
Co-authored-by: Dan Schaper <dan.schaper@pi-hole.net>
2021-12-21 16:20:02 +01:00
yubiuser
39a66b608b Replace Contributing Guide by link to docs.pi-hole.net (#4433)
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-21 14:00:47 +01:00
DL6ER
b06efb6ab7 Declare variables local
Signed-off-by: DL6ER <dl6er@dl6er.de>
2021-12-21 14:00:46 +01:00
DL6ER
ab4bce4787 Allow users to force recovery even when checks are okay using "pihole -g -r recover force"
Signed-off-by: DL6ER <dl6er@dl6er.de>
2021-12-21 13:57:03 +01:00
DL6ER
469c179b32 Return early from recovery routine when integrity checks didn't show any database errors.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2021-12-21 13:57:03 +01:00
DL6ER
190ab79606 Implement fully-automated gravity database recovery method.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2021-12-21 13:57:03 +01:00
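Usage as described by these commits (the plain form runs the integrity checks first and only recovers when they report errors):

```bash
# Check the gravity database and attempt a recovery only if needed
pihole -g -r recover

# Force the recovery routine even when the checks are okay
pihole -g -r recover force
```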
yubiuser
669f1b0f4a Address reviewer's comment
Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>
2021-12-21 12:58:39 +01:00
DL6ER
31de661bbb Merge pull request #4414 from pi-hole/debug/custom.list
Add custom.list (Local DNS Records) to debug log
2021-12-21 12:37:11 +01:00
DL6ER
3a67d1cf8d Merge pull request #4461 from pi-hole/qr_iframe
Companion to pi-hole/adminlte #1996
2021-12-20 21:51:05 +01:00
DL6ER
c0f454ddfa Add new interface listening option "bind" (#4476)
Signed-off-by: DL6ER <dl6er@dl6er.de>
2021-12-20 21:36:19 +01:00
DL6ER
ef0a22f9ec Merge pull request #4478 from pi-hole/fix/db_permission
Gravity database handling improvements
2021-12-20 21:28:09 +01:00
Dan Schaper
533a77d6d5 Add database function failure guards.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2021-12-20 11:36:55 -08:00
Dan Schaper
76ae75689c Check for DNS before run.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2021-12-20 11:09:11 -08:00
Dan Schaper
a780fc59e2 Set DBFile permissions on creation.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2021-12-20 10:56:42 -08:00
Christian König
28085cf7d8 Merge iFrame exceptions
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-17 10:08:16 +01:00
Dan Schaper
a3cc5df317 Configure stale action (#4269)
* Configure stale action

* [skip ci] Update .github/workflows/stale.yml

* Update .github/workflows/stale.yml
2021-12-16 20:19:11 +01:00
Christian König
2eff53b2bb Allow qr code iframe
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-10 07:17:53 +01:00
Christian König
8d6ce78c65 Allow qr code iframe
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-10 07:09:42 +01:00
Christian König
b52a3a021d Add comment help text to list function
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-06 20:30:37 +01:00
yubiuser
ae39e338fe Use exec to run gravity script (#4449)
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-04 10:50:21 +01:00
Christian König
e243c562c2 Unblock adlist domain during gravity run in NODATA mode
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-03 09:17:19 +01:00
DL6ER
4c267f7732 Merge pull request #4445 from pi-hole/fix/counting
Fix number of invalid domains
2021-12-03 08:56:54 +01:00
Christian König
647ba6ec9d Rename variables to improve comprehensibility
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-02 23:13:01 +01:00
Subhaditya Nath
ba6d700e7e Fix pihole -v output if WebAdmin not installed (#4370)
* Fix https://github.com/pi-hole/pi-hole/issues/4279

Signed-off-by: Subhaditya Nath <sn03.general@gmail.com>

* Don't ignore exit code of version.sh

If it exits with a non-zero return code, that means some error occurred,
and so it shouldn't be ignored.

Signed-off-by: Subhaditya Nath <sn03.general@gmail.com>

* Implement changes suggested by @MichaIng

Signed-off-by: Subhaditya Nath <sn03.general@gmail.com>

* Implement changes suggested by @PromoFaux

Signed-off-by: Subhaditya Nath <sn03.general@gmail.com>

* Always source /etc/pihole/setupVars.conf

https://github.com/pi-hole/pi-hole/pull/4370#issuecomment-978149567

Co-authored-by: Adam Warner <me@adamwarner.co.uk>
2021-12-02 20:46:11 +00:00
Chiller Dragon
e485a7b9bb Some shellchecks in basic-install.sh (#4088)
* Some shellchecks in basic-install.sh

Signed-off-by: ChillerDragon <ChillerDragon@gmail.com>

* Use more explicit grep (thanks to @MichaIng)

Signed-off-by: ChillerDragon <ChillerDragon@gmail.com>
2021-12-02 14:44:50 +01:00
Christian König
bfda52ed79 Fix number of invalid domains
Co-authored-by: abesnier <besnier_antoine@yahoo.fr>
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-12-01 11:07:17 +01:00
yubiuser
941f90d5c1 Merge pull request #4443 from MichaIng/patch-1
Fix generated /etc/os-release file in OS check test
2021-11-30 13:05:34 +01:00
MichaIng
14a379d448 Fix generated /etc/os-release file in OS check test
Signed-off-by: MichaIng <micha@dietpi.com>
2021-11-30 02:57:44 +01:00
yubiuser
671fcaffc3 Merge pull request #4085 from jbzdarkid/patch-3
Clean up bash script formatting
2021-11-26 09:23:31 +01:00
jbzdarkid
bc8150adfa Clean up bash script formatting
Done with the help of beautysh (a python-based bash formatter)

Signed-off-by: jbzdarkid <jbzdarkid@gmail.com>
2021-11-25 14:12:09 -08:00
yubiuser
b750b01acc Merge pull request #4434 from MichaIng/patch-1
Use a fixed list height for network interface selection
2021-11-22 16:37:58 +01:00
MichaIng
996a2c74fa Use a fixed list height for network interface selection
This solves the issue reported here: https://github.com/pi-hole/pi-hole/issues/4196
It replaces the other suggested solution here: https://github.com/pi-hole/pi-hole/pull/4197

The benefit of using a fixed/limited list height, compared to allowing larger whiptail/dialogue dimensions, is that it works on small screens as well, where the screen or console size itself is too small to hold the interface list + text above + whiptail frame.

If the number of list elements exceeds the defined list height, a visual scroll bar is added automatically and the list can be scrolled with the up/down and page-up/page-down keys, hence it is generally not required to adjust the list height based on the number of elements. The fixed height of "6" is chosen since all other "--radiolist" calls use this fixed height as well, it fits and looks good within a 20-row whiptail dialogue, and in the common Pi-hole use cases there are no more than 6 network interfaces.

Signed-off-by: MichaIng <micha@dietpi.com>
2021-11-22 13:25:13 +01:00
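For illustration, a radiolist call with the fixed list height of 6 inside a 20-row dialogue, as described above; the interface names and the other dimensions are examples:

```bash
#!/usr/bin/env bash
# The third numeric argument (6) is the list height: with more than six
# entries, whiptail adds a scroll bar instead of growing the dialogue.
chosen=$(whiptail --radiolist "Choose an interface (space bar selects)" 20 60 6 \
    "eth0"  "Wired"        ON \
    "wlan0" "Wireless"     OFF \
    "eth1"  "Second wired" OFF \
    3>&1 1>&2 2>&3) || exit 1
echo "Selected interface: ${chosen}"
```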
DL6ER
d85fee27a9 Merge pull request #4420 from pi-hole/clean/webpage.sh
Remove unused code from webpage.sh
2021-11-20 21:13:50 +01:00
Adam Warner
cdd4d9ea9e Update the tests (#4427)
* unpin the requirements and update all to latest available - needs more work still. see notes in `def host()`

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

* fix py3 monkey patch of testinfra docker using bash

Signed-off-by: Adam Hill <adam@diginc.us>

* update the other test files to use `host` instead of `Pihole`
Address some stickler and codefactor
update python version from 3.7 to 3.8
preload `git` onto the centos/fedora test images, and switch which with command -v in the passthrough mock
testinfra is deprecated, use pytest-testinfra

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

Co-authored-by: Adam Hill <adam@diginc.us>
2021-11-18 01:03:37 +00:00
pvogt09
cedd1a2591 unit test for umask problems in #3177 and #2730 (#3191)
* add test for file permissions of $webroot

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* changes sudo to su for running command as user www-data

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* installs PIHOLE_WEB_DEPS to create LIGHTTPD_USER

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* changes stdout to rc

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* use installPihole instead of installPiholeWeb in test

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* try installation process with main

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* mock systemctl

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* removes stickler errors

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* start lighttpd and make webpage test optional

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* test all files and directories in $webroot

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* fix stickler and codefactor warnings

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* set permission for /var/cache if it did not exist before

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* add test case for pihole files

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* fix stickler errors

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* revert "set permission for /var/cache if it did not exist before" and make lighttpd start work

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* add --add-cap=NET_ADMIN to enable FTL start

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* specify DNS server for cURL

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* check files created by FTL

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* reorder code and change nameserver in /etc/resolv.conf

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* resolve with dig instead of relying on /etc/resolv.conf

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* set IP to 127.0.0.1 in setupVars.conf for blockpage tests

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* resolve domain with dig and remove debug output

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* fix stickler errors

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* no git pull in Github Action runs for pull requests

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* --cap-add=ALL test

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* fix stickler errors

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* remove debug code

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* update_repo patch for CentOS 7 in Github Actions

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* removes TODOs and stickler warnings

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* adds trailing slash to domain

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* use only first result from dig

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* domain name resolution does not work reliably in docker container

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* repair executable permission

Signed-off-by: pvogt09 <50047961+pvogt09@users.noreply.github.com>

* Create mock_command_passthrough that allows intercepting of specific arguments - everything else is passed through to the proper command. Use this new command instead of making changes in basic-install.sh to make the tests pass.

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

Co-authored-by: Adam Warner <me@adamwarner.co.uk>
2021-11-11 16:44:57 +00:00
yubiuser
ac4a975be5 Allow users to skip setting a static IP address (#4419)
* Allow users to skip setting a static IP address

Signed-off-by: Christian König <ckoenig@posteo.de>
2021-11-06 20:32:03 +00:00
yubiuser
996f8fff28 Recommend apt instead of apt-get if updating the package cache failed (#4421)
* Only change the recommendation to use apt

Signed-off-by: Christian König <ckoenig@posteo.de>
2021-11-04 15:55:16 -07:00
Christian König
e733553295 Remove unused code from webpage.sh
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-11-02 21:22:14 +01:00
yubiuser
0c4e1b51ab Merge pull request #4417 from aviddiviner/fix-rfc-config-docs
Fix documentation; add some missing zones
2021-10-28 20:13:42 +02:00
David Irvine
c6da1a3918 Fix documentation; add some missing zones
Signed-off-by: David Irvine <aviddiviner@gmail.com>
2021-10-28 12:09:34 +02:00
Christian König
c1eb35a35e Add custom.list (Local DNS Records) to debug log
Signed-off-by: Christian König <ckoenig@posteo.de>
2021-10-26 22:46:52 +02:00
Adam Warner
b5e0f142cc Merge pull request #4405 from pi-hole/development
Pi-hole v5.6
2021-10-23 20:01:27 +01:00
Adam Warner
dad6247cb0 Merge pull request #4347 from pi-hole/development
Pi-hole core v5.5
2021-09-29 21:45:58 +01:00
54 changed files with 2279 additions and 1852 deletions

7
.github/release.yml vendored Normal file
View File

@@ -0,0 +1,7 @@
changelog:
  exclude:
    labels:
      - internal
    authors:
      - dependabot
      - github-actions

25
.github/workflows/stale.yml vendored Normal file
View File

@@ -0,0 +1,25 @@
name: Mark stale issues
on:
  schedule:
    - cron: '0 * * * *'
  workflow_dispatch:
jobs:
  stale:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: actions/stale@v4
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          days-before-stale: 30
          days-before-close: 5
          stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Please comment or update this issue or it will be closed in 5 days.'
          stale-issue-label: 'stale'
          exempt-issue-labels: 'Internal, Fixed in next release, Bug: Confirmed, Documentation Needed'
          exempt-all-issue-assignees: true
          operations-per-run: 300

27
.github/workflows/sync-back-to-dev.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
name: Sync Back to Development
on:
  push:
    branches:
      - master
jobs:
  sync-branches:
    runs-on: ubuntu-latest
    name: Syncing branches
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Opening pull request
        id: pull
        uses: tretuna/sync-branches@1.4.0
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          FROM_BRANCH: 'master'
          TO_BRANCH: 'development'
      - name: Label the pull request to ignore for release note generation
        uses: actions-ecosystem/action-add-labels@v1
        with:
          labels: internal
          repo: ${{ github.repository }}
          number: ${{ steps.pull.outputs.PULL_REQUEST_NUMBER }}

View File

@@ -36,10 +36,10 @@ jobs:
         name: Checkout repository
         uses: actions/checkout@v2
       -
-        name: Set up Python 3.7
+        name: Set up Python 3.8
         uses: actions/setup-python@v2
         with:
-          python-version: 3.7
+          python-version: 3.8
       -
         name: Install dependencies
         run: pip install -r test/requirements.txt

1
.gitignore vendored
View File

@@ -9,3 +9,4 @@ __pycache__
 *.egg-info
 .idea/
 *.iml
+.vscode/

View File

@@ -2,111 +2,6 @@
 Please read and understand the contribution guide before creating an issue or pull request.
-## Etiquette
+The guide can be found here: [https://docs.pi-hole.net/guides/github/contributing/](https://docs.pi-hole.net/guides/github/contributing/)
- Our goal for Pi-hole is **stability before features**. This means we focus on squashing critical bugs before adding new features. Often, we can do both in tandem, but bugs will take priority over a new feature.
- Pi-hole is open source and [powered by donations](https://pi-hole.net/donate/), and as such, we give our **free time** to build, maintain, and **provide user support** for this project. It would be extremely unfair for us to suffer abuse or anger for our hard work, so please take a moment to consider that.
- Please be considerate towards the developers and other users when raising issues or presenting pull requests.
- Respect our decision(s), and do not be upset or abusive if your submission is not used.
## Viability
When requesting or submitting new features, first consider whether it might be useful to others. Open source projects are used by many people, who may have entirely different needs to your own. Think about whether or not your feature is likely to be used by other users of the project.
## Procedure
**Before filing an issue:**
- Attempt to replicate and **document** the problem, to ensure that it wasn't a coincidental incident.
- Check to make sure your feature suggestion isn't already present within the project.
- Check the pull requests tab to ensure that the bug doesn't have a fix in progress.
- Check the pull requests tab to ensure that the feature isn't already in progress.
**Before submitting a pull request:**
- Check the codebase to ensure that your feature doesn't already exist.
- Check the pull requests to ensure that another person hasn't already submitted the feature or fix.
- Read and understand the [DCO guidelines](https://docs.pi-hole.net/guides/github/contributing/) for the project.
## Technical Requirements
- Submit Pull Requests to the **development branch only**.
- Before Submitting your Pull Request, merge `development` with your new branch and fix any conflicts. (Make sure you don't break anything in development!)
- Please use the [Google Style Guide for Shell](https://google.github.io/styleguide/shell.xml) for your code submission styles.
- Commit Unix line endings.
- Please use the Pi-hole brand: **Pi-hole** (Take a special look at the capitalized 'P' and a low 'h' with a hyphen)
- (Optional fun) keep to the theme of Star Trek/black holes/gravity.
## Forking and Cloning from GitHub to GitHub
1. Fork <https://github.com/pi-hole/pi-hole/> to a repo under a namespace you control, or have permission to use, for example: `https://github.com/<your_namespace>/<your_repo_name>/`. You can do this from the github.com website.
2. Clone `https://github.com/<your_namespace>/<your_repo_name>/` with the tool of you choice.
3. To keep your fork in sync with our repo, add an upstream remote for pi-hole/pi-hole to your repo.
```bash
git remote add upstream https://github.com/pi-hole/pi-hole.git
```
4. Checkout the `development` branch from your fork `https://github.com/<your_namespace>/<your_repo_name>/`.
5. Create a topic/branch, based on the `development` branch code. *Bonus fun to keep to the theme of Star Trek/black holes/gravity.*
6. Make your changes and commit to your topic branch in your repo.
7. Rebase your commits and squash any insignificant commits. See the notes below for an example.
8. Merge `development` your branch and fix any conflicts.
9. Open a Pull Request to merge your topic branch into our repo's `development` branch.
- Keep in mind the technical requirements from above.
## Forking and Cloning from GitHub to other code hosting sites
- Forking is a GitHub concept and cannot be done from GitHub to other git-based code hosting sites. However, those sites may be able to mirror a GitHub repo.
1. To contribute from another code hosting site, you must first complete the steps above to fork our repo to a GitHub namespace you have permission to use, for example: `https://github.com/<your_namespace>/<your_repo_name>/`.
2. Create a repo in your code hosting site, for example: `https://gitlab.com/<your_namespace>/<your_repo_name>/`
3. Follow the instructions from your code hosting site to create a mirror between `https://github.com/<your_namespace>/<your_repo_name>/` and `https://gitlab.com/<your_namespace>/<your_repo_name>/`.
4. When you are ready to create a Pull Request (PR), follow the steps `(starting at step #6)` from [Forking and Cloning from GitHub to GitHub](#forking-and-cloning-from-github-to-github) and create the PR from `https://github.com/<your_namespace>/<your_repo_name>/`.
## Notes for squashing commits with rebase
- To rebase your commits and squash previous commits, you can use:
```bash
git rebase -i your_topic_branch~(number of commits to combine)
```
- For more details visit [gitready.com](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html)
1. The following would combine the last four commits in the branch `mytopic`.
```bash
git rebase -i mytopic~4
```
2. An editor window opens with the most recent commits indicated: (edit the commands to the left of the commit ID)
```gitattributes
pick 9dff55b2 existing commit comments
squash ebb1a730 existing commit comments
squash 07cc5b50 existing commit comments
reword 9dff55b2 existing commit comments
```
3. Save and close the editor. The next editor window opens: (edit the new commit message). *If you select reword for a commit, an additional editor window will open for you to edit the comment.*
```bash
new commit comments
Signed-off-by: yourname <your email address>
```
4. Save and close the editor for the rebase process to execute. The terminal output should say something like the following:
```bash
Successfully rebased and updated refs/heads/mytopic.
```
5. Once you have a successful rebase, and before you sync your local clone, you have to force push origin to update your repo:
```bash
git push -f origin
```
6. Continue on from step #7 from [Forking and Cloning from GitHub to GitHub](#forking-and-cloning-from-github-to-github)

View File

@@ -25,11 +25,12 @@ server=/localhost/
 server=/invalid/
 # The same RFC requests something similar for
-# 16.172.in-addr.arpa. 22.172.in-addr.arpa. 27.172.in-addr.arpa.
-# 17.172.in-addr.arpa. 30.172.in-addr.arpa. 28.172.in-addr.arpa.
-# 18.172.in-addr.arpa. 23.172.in-addr.arpa. 29.172.in-addr.arpa.
-# 19.172.in-addr.arpa. 24.172.in-addr.arpa. 31.172.in-addr.arpa.
-# 20.172.in-addr.arpa. 25.172.in-addr.arpa. 168.192.in-addr.arpa.
+# 10.in-addr.arpa. 21.172.in-addr.arpa. 27.172.in-addr.arpa.
+# 16.172.in-addr.arpa. 22.172.in-addr.arpa. 28.172.in-addr.arpa.
+# 17.172.in-addr.arpa. 23.172.in-addr.arpa. 29.172.in-addr.arpa.
+# 18.172.in-addr.arpa. 24.172.in-addr.arpa. 30.172.in-addr.arpa.
+# 19.172.in-addr.arpa. 25.172.in-addr.arpa. 31.172.in-addr.arpa.
+# 20.172.in-addr.arpa. 26.172.in-addr.arpa. 168.192.in-addr.arpa.
 # Pi-hole implements this via the dnsmasq option "bogus-priv" (see
 # 01-pihole.conf) because this also covers IPv6.

View File

@@ -329,8 +329,8 @@ get_sys_stats() {
 *) cpu_col="$COL_URG_RED";;
 esac
 # $COL_NC$COL_DARK_GRAY is needed for $COL_URG_RED
 cpu_temp_str=" @ $cpu_col$cpu_temp$COL_NC$COL_DARK_GRAY"
 elif [[ "$temp_unit" == "F" ]]; then
 cpu_temp=$(printf "%.0ff\\n" "$(calcFunc "($(< $temp_file) / 1000) * 9 / 5 + 32")")
@@ -357,7 +357,7 @@ get_sys_stats() {
 ram_used="${ram_raw[1]}"
 ram_total="${ram_raw[2]}"
-if [[ "$(pihole status web 2> /dev/null)" == "1" ]]; then
+if [[ "$(pihole status web 2> /dev/null)" -ge "1" ]]; then
 ph_status="${COL_LIGHT_GREEN}Active"
 else
 ph_status="${COL_LIGHT_RED}Offline"
@@ -445,7 +445,7 @@ get_strings() {
 lan_info="Gateway: $net_gateway"
 dhcp_info="$leased_str$ph_dhcp_num of $ph_dhcp_max"
 ads_info="$total_str$ads_blocked_today of $dns_queries_today"
 dns_info="$dns_count DNS servers"
 [[ "$recent_blocked" == "0" ]] && recent_blocked="${COL_LIGHT_RED}FTL offline${COL_NC}"
@@ -488,7 +488,7 @@ chronoFunc() {
 ${COL_LIGHT_RED}Press Ctrl-C to exit${COL_NC}
 ${COL_DARK_GRAY}$scr_line_str${COL_NC}"
 else
 echo -e "|¯¯¯(¯)_|¯|_ ___|¯|___$phc_ver_str\\n| ¯_/¯|_| ' \\/ _ \\ / -_)$lte_ver_str\\n|_| |_| |_||_\\___/_\\___|$ftl_ver_str\\n ${COL_DARK_GRAY}$scr_line_str${COL_NC}"
 fi
 printFunc " Hostname: " "$sys_name" "$host_info"

View File

@@ -19,13 +19,13 @@ upgrade_gravityDB(){
 auditFile="${piholeDir}/auditlog.list"
 # Get database version
-version="$(sqlite3 "${database}" "SELECT \"value\" FROM \"info\" WHERE \"property\" = 'version';")"
+version="$(pihole-FTL sqlite3 "${database}" "SELECT \"value\" FROM \"info\" WHERE \"property\" = 'version';")"
 if [[ "$version" == "1" ]]; then
 # This migration script upgrades the gravity.db file by
 # adding the domain_audit table
 echo -e " ${INFO} Upgrading gravity database from version 1 to 2"
-sqlite3 "${database}" < "${scriptPath}/1_to_2.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/1_to_2.sql"
 version=2
 # Store audit domains in database table
@@ -40,28 +40,28 @@ upgrade_gravityDB(){
 # renaming the regex table to regex_blacklist, and
 # creating a new regex_whitelist table + corresponding linking table and views
 echo -e " ${INFO} Upgrading gravity database from version 2 to 3"
-sqlite3 "${database}" < "${scriptPath}/2_to_3.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/2_to_3.sql"
 version=3
 fi
 if [[ "$version" == "3" ]]; then
 # This migration script unifies the formally separated domain
 # lists into a single table with a UNIQUE domain constraint
 echo -e " ${INFO} Upgrading gravity database from version 3 to 4"
-sqlite3 "${database}" < "${scriptPath}/3_to_4.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/3_to_4.sql"
 version=4
 fi
 if [[ "$version" == "4" ]]; then
 # This migration script upgrades the gravity and list views
 # implementing necessary changes for per-client blocking
 echo -e " ${INFO} Upgrading gravity database from version 4 to 5"
-sqlite3 "${database}" < "${scriptPath}/4_to_5.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/4_to_5.sql"
 version=5
 fi
 if [[ "$version" == "5" ]]; then
 # This migration script upgrades the adlist view
 # to return an ID used in gravity.sh
 echo -e " ${INFO} Upgrading gravity database from version 5 to 6"
-sqlite3 "${database}" < "${scriptPath}/5_to_6.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/5_to_6.sql"
 version=6
 fi
 if [[ "$version" == "6" ]]; then
@@ -69,7 +69,7 @@ upgrade_gravityDB(){
 # which is automatically associated to all clients not
 # having their own group assignments
 echo -e " ${INFO} Upgrading gravity database from version 6 to 7"
-sqlite3 "${database}" < "${scriptPath}/6_to_7.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/6_to_7.sql"
 version=7
 fi
 if [[ "$version" == "7" ]]; then
@@ -77,21 +77,21 @@ upgrade_gravityDB(){
 # to ensure uniqueness on the group name
 # We also add date_added and date_modified columns
 echo -e " ${INFO} Upgrading gravity database from version 7 to 8"
-sqlite3 "${database}" < "${scriptPath}/7_to_8.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/7_to_8.sql"
 version=8
 fi
 if [[ "$version" == "8" ]]; then
 # This migration fixes some issues that were introduced
 # in the previous migration script.
 echo -e " ${INFO} Upgrading gravity database from version 8 to 9"
-sqlite3 "${database}" < "${scriptPath}/8_to_9.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/8_to_9.sql"
 version=9
 fi
 if [[ "$version" == "9" ]]; then
 # This migration drops unused tables and creates triggers to remove
 # obsolete groups assignments when the linked items are deleted
 echo -e " ${INFO} Upgrading gravity database from version 9 to 10"
-sqlite3 "${database}" < "${scriptPath}/9_to_10.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/9_to_10.sql"
 version=10
 fi
 if [[ "$version" == "10" ]]; then
@@ -101,31 +101,31 @@ upgrade_gravityDB(){
 # to keep the copying process generic (needs the same columns in both the
 # source and the destination databases).
 echo -e " ${INFO} Upgrading gravity database from version 10 to 11"
-sqlite3 "${database}" < "${scriptPath}/10_to_11.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/10_to_11.sql"
 version=11
 fi
 if [[ "$version" == "11" ]]; then
 # Rename group 0 from "Unassociated" to "Default"
 echo -e " ${INFO} Upgrading gravity database from version 11 to 12"
-sqlite3 "${database}" < "${scriptPath}/11_to_12.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/11_to_12.sql"
 version=12
 fi
 if [[ "$version" == "12" ]]; then
 # Add column date_updated to adlist table
 echo -e " ${INFO} Upgrading gravity database from version 12 to 13"
-sqlite3 "${database}" < "${scriptPath}/12_to_13.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/12_to_13.sql"
 version=13
 fi
 if [[ "$version" == "13" ]]; then
 # Add columns number and status to adlist table
 echo -e " ${INFO} Upgrading gravity database from version 13 to 14"
-sqlite3 "${database}" < "${scriptPath}/13_to_14.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/13_to_14.sql"
 version=14
 fi
 if [[ "$version" == "14" ]]; then
 # Changes the vw_adlist created in 5_to_6
 echo -e " ${INFO} Upgrading gravity database from version 14 to 15"
-sqlite3 "${database}" < "${scriptPath}/14_to_15.sql"
+pihole-FTL sqlite3 "${database}" < "${scriptPath}/14_to_15.sql"
 version=15
 fi
 }

View File

@@ -16,7 +16,7 @@ GRAVITYDB="${piholeDir}/gravity.db"
 # Source pihole-FTL from install script
 pihole_FTL="${piholeDir}/pihole-FTL.conf"
 if [[ -f "${pihole_FTL}" ]]; then
 source "${pihole_FTL}"
 fi
 # Set this only after sourcing pihole-FTL.conf as the gravity database path may
@@ -91,7 +91,8 @@ Options:
   -q, --quiet Make output less verbose
   -h, --help Show this help dialog
   -l, --list Display all your ${listname}listed domains
-  --nuke Removes all entries in a list"
+  --nuke Removes all entries in a list
+  --comment \"text\" Add a comment to the domain. If adding multiple domains the same comment will be used for all"
 exit 0
 }
@@ -133,7 +134,7 @@ ProcessDomainList() {
 else
 RemoveDomain "${dom}"
 fi
 done
 }
 AddDomain() {
@@ -141,23 +142,23 @@ AddDomain() {
 domain="$1"
 # Is the domain in the list we want to add it to?
-num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}';")"
+num="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}';")"
 requestedListname="$(GetListnameFromTypeId "${typeId}")"
 if [[ "${num}" -ne 0 ]]; then
-existingTypeId="$(sqlite3 "${gravityDBfile}" "SELECT type FROM domainlist WHERE domain = '${domain}';")"
+existingTypeId="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT type FROM domainlist WHERE domain = '${domain}';")"
 if [[ "${existingTypeId}" == "${typeId}" ]]; then
 if [[ "${verbose}" == true ]]; then
 echo -e " ${INFO} ${1} already exists in ${requestedListname}, no need to add!"
+fi
+else
+existingListname="$(GetListnameFromTypeId "${existingTypeId}")"
+pihole-FTL sqlite3 "${gravityDBfile}" "UPDATE domainlist SET type = ${typeId} WHERE domain='${domain}';"
+if [[ "${verbose}" == true ]]; then
+echo -e " ${INFO} ${1} already exists in ${existingListname}, it has been moved to ${requestedListname}!"
+fi
 fi
-else
+return
-existingListname="$(GetListnameFromTypeId "${existingTypeId}")"
-sqlite3 "${gravityDBfile}" "UPDATE domainlist SET type = ${typeId} WHERE domain='${domain}';"
-if [[ "${verbose}" == true ]]; then
-echo -e " ${INFO} ${1} already exists in ${existingListname}, it has been moved to ${requestedListname}!"
-fi
-fi
-return
 fi
 # Domain not found in the table, add it!
@@ -168,10 +169,10 @@ AddDomain() {
 # Insert only the domain here. The enabled and date_added fields will be filled
 # with their default values (enabled = true, date_added = current timestamp)
 if [[ -z "${comment}" ]]; then
-sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type) VALUES ('${domain}',${typeId});"
+pihole-FTL sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type) VALUES ('${domain}',${typeId});"
 else
 # also add comment when variable has been set through the "--comment" option
-sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type,comment) VALUES ('${domain}',${typeId},'${comment}');"
+pihole-FTL sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type,comment) VALUES ('${domain}',${typeId},'${comment}');"
 fi
 }
@@ -180,15 +181,15 @@ RemoveDomain() {
 domain="$1"
 # Is the domain in the list we want to remove it from?
-num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};")"
+num="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};")"
 requestedListname="$(GetListnameFromTypeId "${typeId}")"
 if [[ "${num}" -eq 0 ]]; then
 if [[ "${verbose}" == true ]]; then
 echo -e " ${INFO} ${domain} does not exist in ${requestedListname}, no need to remove!"
 fi
 return
 fi
 # Domain found in the table, remove it!
@@ -197,14 +198,14 @@ RemoveDomain() {
 fi
 reload=true
 # Remove it from the current list
-sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};"
+pihole-FTL sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};"
 }
 Displaylist() {
 local count num_pipes domain enabled status nicedate requestedListname
 requestedListname="$(GetListnameFromTypeId "${typeId}")"
-data="$(sqlite3 "${gravityDBfile}" "SELECT domain,enabled,date_modified FROM domainlist WHERE type = ${typeId};" 2> /dev/null)"
+data="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT domain,enabled,date_modified FROM domainlist WHERE type = ${typeId};" 2> /dev/null)"
 if [[ -z $data ]]; then
 echo -e "Not showing empty list"
@@ -242,10 +243,10 @@ Displaylist() {
 }
 NukeList() {
-count=$(sqlite3 "${gravityDBfile}" "SELECT COUNT(1) FROM domainlist WHERE type = ${typeId};")
+count=$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(1) FROM domainlist WHERE type = ${typeId};")
 listname="$(GetListnameFromTypeId "${typeId}")"
 if [ "$count" -gt 0 ];then
-sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE type = ${typeId};"
+pihole-FTL sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE type = ${typeId};"
 echo " ${TICK} Removed ${count} domain(s) from the ${listname}"
 else
 echo " ${INFO} ${listname} already empty. Nothing to do!"
@@ -256,8 +257,8 @@ NukeList() {
 GetComment() {
 comment="$1"
 if [[ "${comment}" =~ [^a-zA-Z0-9_\#:/\.,\ -] ]]; then
 echo " ${CROSS} Found invalid characters in domain comment!"
 exit
 fi
 }
@@ -292,7 +293,7 @@ ProcessDomainList
 # Used on web interface
 if $web; then
 echo "DONE"
 fi
 if [[ ${reload} == true && ${noReloadRequested} == false ]]; then

View File

@@ -39,7 +39,7 @@ flushARP(){
# Truncate network_addresses table in pihole-FTL.db # Truncate network_addresses table in pihole-FTL.db
# This needs to be done before we can truncate the network table due to # This needs to be done before we can truncate the network table due to
# foreign key constraints # foreign key constraints
if ! output=$(sqlite3 "${DBFILE}" "DELETE FROM network_addresses" 2>&1); then if ! output=$(pihole-FTL sqlite3 "${DBFILE}" "DELETE FROM network_addresses" 2>&1); then
echo -e "${OVER} ${CROSS} Failed to truncate network_addresses table" echo -e "${OVER} ${CROSS} Failed to truncate network_addresses table"
echo " Database location: ${DBFILE}" echo " Database location: ${DBFILE}"
echo " Output: ${output}" echo " Output: ${output}"
@@ -47,7 +47,7 @@ flushARP(){
fi fi
# Truncate network table in pihole-FTL.db # Truncate network table in pihole-FTL.db
if ! output=$(sqlite3 "${DBFILE}" "DELETE FROM network" 2>&1); then if ! output=$(pihole-FTL sqlite3 "${DBFILE}" "DELETE FROM network" 2>&1); then
echo -e "${OVER} ${CROSS} Failed to truncate network table" echo -e "${OVER} ${CROSS} Failed to truncate network table"
echo " Database location: ${DBFILE}" echo " Database location: ${DBFILE}"
echo " Output: ${output}" echo " Output: ${output}"



@@ -27,7 +27,7 @@ PIHOLE_COLTABLE_FILE="${PIHOLE_SCRIPTS_DIRECTORY}/COL_TABLE"
# These provide the colors we need for making the log more readable # These provide the colors we need for making the log more readable
if [[ -f ${PIHOLE_COLTABLE_FILE} ]]; then if [[ -f ${PIHOLE_COLTABLE_FILE} ]]; then
source ${PIHOLE_COLTABLE_FILE} source ${PIHOLE_COLTABLE_FILE}
else else
COL_NC='\e[0m' # No Color COL_NC='\e[0m' # No Color
COL_RED='\e[1;91m' COL_RED='\e[1;91m'
@@ -88,6 +88,7 @@ PIHOLE_LOCAL_HOSTS_FILE="${PIHOLE_DIRECTORY}/local.list"
PIHOLE_LOGROTATE_FILE="${PIHOLE_DIRECTORY}/logrotate" PIHOLE_LOGROTATE_FILE="${PIHOLE_DIRECTORY}/logrotate"
PIHOLE_SETUP_VARS_FILE="${PIHOLE_DIRECTORY}/setupVars.conf" PIHOLE_SETUP_VARS_FILE="${PIHOLE_DIRECTORY}/setupVars.conf"
PIHOLE_FTL_CONF_FILE="${PIHOLE_DIRECTORY}/pihole-FTL.conf" PIHOLE_FTL_CONF_FILE="${PIHOLE_DIRECTORY}/pihole-FTL.conf"
PIHOLE_CUSTOM_HOSTS_FILE="${PIHOLE_DIRECTORY}/custom.list"
# Read the value of an FTL config key. The value is printed to stdout. # Read the value of an FTL config key. The value is printed to stdout.
# #
@@ -179,7 +180,8 @@ REQUIRED_FILES=("${PIHOLE_CRON_FILE}"
"${PIHOLE_WEB_SERVER_ACCESS_LOG_FILE}" "${PIHOLE_WEB_SERVER_ACCESS_LOG_FILE}"
"${PIHOLE_WEB_SERVER_ERROR_LOG_FILE}" "${PIHOLE_WEB_SERVER_ERROR_LOG_FILE}"
"${RESOLVCONF}" "${RESOLVCONF}"
"${DNSMASQ_CONF}") "${DNSMASQ_CONF}"
"${PIHOLE_CUSTOM_HOSTS_FILE}")
DISCLAIMER="This process collects information from your Pi-hole, and optionally uploads it to a unique and random directory on tricorder.pi-hole.net. DISCLAIMER="This process collects information from your Pi-hole, and optionally uploads it to a unique and random directory on tricorder.pi-hole.net.
@@ -465,6 +467,9 @@ diagnose_operating_system() {
# Display the current test that is running # Display the current test that is running
echo_current_diagnostic "Operating system" echo_current_diagnostic "Operating system"
# If the PIHOLE_DOCKER_TAG variable is set, include this information in the debug output
[ -n "${PIHOLE_DOCKER_TAG}" ] && log_write "${INFO} Pi-hole Docker Container: ${PIHOLE_DOCKER_TAG}"
# If there is a /etc/*release file, it's probably a supported operating system, so we can # If there is a /etc/*release file, it's probably a supported operating system, so we can
if ls /etc/*release 1> /dev/null 2>&1; then if ls /etc/*release 1> /dev/null 2>&1; then
# display the attributes to the user from the function made earlier # display the attributes to the user from the function made earlier
@@ -728,11 +733,11 @@ compare_port_to_service_assigned() {
# If the service is a Pi-hole service, highlight it in green # If the service is a Pi-hole service, highlight it in green
if [[ "${service_name}" == "${expected_service}" ]]; then if [[ "${service_name}" == "${expected_service}" ]]; then
log_write "[${COL_GREEN}${port}${COL_NC}] is in use by ${COL_GREEN}${service_name}${COL_NC}" log_write "${TICK} ${COL_GREEN}${port}${COL_NC} is in use by ${COL_GREEN}${service_name}${COL_NC}"
# Otherwise, # Otherwise,
else else
# Show the service name in red since it's non-standard # Show the service name in red since it's non-standard
log_write "[${COL_RED}${port}${COL_NC}] is in use by ${COL_RED}${service_name}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_PORTS})" log_write "${CROSS} ${COL_RED}${port}${COL_NC} is in use by ${COL_RED}${service_name}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_PORTS})"
fi fi
} }
@@ -748,36 +753,47 @@ check_required_ports() {
# Sort the addresses and remove duplicates # Sort the addresses and remove duplicates
while IFS= read -r line; do while IFS= read -r line; do
ports_in_use+=( "$line" ) ports_in_use+=( "$line" )
done < <( lsof -iTCP -sTCP:LISTEN -P -n +c 10 ) done < <( ss --listening --numeric --tcp --udp --processes --no-header )
# Now that we have the values stored, # Now that we have the values stored,
for i in "${!ports_in_use[@]}"; do for i in "${!ports_in_use[@]}"; do
# loop through them and assign some local variables # loop through them and assign some local variables
local service_name local service_name
service_name=$(echo "${ports_in_use[$i]}" | awk '{print $1}') service_name=$(echo "${ports_in_use[$i]}" | awk '{gsub(/users:\(\("/,"",$7);gsub(/".*/,"",$7);print $7}')
local protocol_type local protocol_type
protocol_type=$(echo "${ports_in_use[$i]}" | awk '{print $5}') protocol_type=$(echo "${ports_in_use[$i]}" | awk '{print $1}')
local port_number local port_number
port_number="$(echo "${ports_in_use[$i]}" | awk '{print $9}')" port_number="$(echo "${ports_in_use[$i]}" | awk '{print $5}')" # | awk '{gsub(/^.*:/,"",$5);print $5}')
# Skip the line if it's the titles of the columns the lsof command produces
if [[ "${service_name}" == COMMAND ]]; then
continue
fi
# Use a case statement to determine if the right services are using the right ports # Use a case statement to determine if the right services are using the right ports
case "$(echo "$port_number" | rev | cut -d: -f1 | rev)" in case "$(echo "${port_number}" | rev | cut -d: -f1 | rev)" in
53) compare_port_to_service_assigned "${resolver}" "${service_name}" 53 53) compare_port_to_service_assigned "${resolver}" "${service_name}" "${protocol_type}:${port_number}"
;; ;;
80) compare_port_to_service_assigned "${web_server}" "${service_name}" 80 80) compare_port_to_service_assigned "${web_server}" "${service_name}" "${protocol_type}:${port_number}"
;; ;;
4711) compare_port_to_service_assigned "${ftl}" "${service_name}" 4711 4711) compare_port_to_service_assigned "${ftl}" "${service_name}" "${protocol_type}:${port_number}"
;; ;;
# If it's not a default port that Pi-hole needs, just print it out for the user to see # If it's not a default port that Pi-hole needs, just print it out for the user to see
*) log_write "${port_number} ${service_name} (${protocol_type})"; *) log_write " ${protocol_type}:${port_number} is in use by ${service_name:=<unknown>}";
esac esac
done done
} }
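The port check now reads ss instead of lsof, so it works without the extra lsof dependency. The awk expressions above pick the protocol, local address:port and owning process out of each line; a sketch with an illustrative ss line (exact column contents vary by system):

    #!/usr/bin/env bash
    # Sketch: how the awk calls above split one "ss --listening ... --no-header" line.
    # Field 1 = protocol, field 5 = local address:port, field 7 = users:(("name",pid=...,fd=...))
    line='udp UNCONN 0 0 0.0.0.0:53 0.0.0.0:* users:(("pihole-FTL",pid=1234,fd=8))'

    protocol=$(echo "${line}" | awk '{print $1}')
    port=$(echo "${line}" | awk '{print $5}')
    service=$(echo "${line}" | awk '{gsub(/users:\(\("/,"",$7);gsub(/".*/,"",$7);print $7}')

    echo "${protocol}:${port} is in use by ${service}"   # udp:0.0.0.0:53 is in use by pihole-FTL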
ip_command() {
# Obtain and log information from "ip XYZ show" commands
echo_current_diagnostic "${2}"
local entries=()
mapfile -t entries < <(ip "${1}" show)
for line in "${entries[@]}"; do
log_write " ${line}"
done
}
check_ip_command() {
ip_command "addr" "Network interfaces and addresses"
ip_command "route" "Network routing table"
}
check_networking() { check_networking() {
# Runs through several of the functions made earlier; we just clump them # Runs through several of the functions made earlier; we just clump them
# together since they are all related to the networking aspect of things # together since they are all related to the networking aspect of things
@@ -786,7 +802,9 @@ check_networking() {
detect_ip_addresses "6" detect_ip_addresses "6"
ping_gateway "4" ping_gateway "4"
ping_gateway "6" ping_gateway "6"
check_required_ports # Skip the following check if installed in docker container. Unpriv'ed containers do not have access to the information required
# to resolve the service name listening - and the container should not start if there was a port conflict anyway
[ -z "${PIHOLE_DOCKER_TAG}" ] && check_required_ports
} }
check_x_headers() { check_x_headers() {
@@ -870,7 +888,7 @@ dig_at() {
# This helps emulate queries to different domains that a user might query # This helps emulate queries to different domains that a user might query
# It will also give extra assurance that Pi-hole is correctly resolving and blocking domains # It will also give extra assurance that Pi-hole is correctly resolving and blocking domains
local random_url local random_url
random_url=$(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity ORDER BY RANDOM() LIMIT 1") random_url=$(pihole-FTL sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity ORDER BY RANDOM() LIMIT 1")
# Next we need to check if Pi-hole can resolve a domain when the query is sent to its IP address # Next we need to check if Pi-hole can resolve a domain when the query is sent to its IP address
# This better emulates how clients will interact with Pi-hole as opposed to above where Pi-hole is # This better emulates how clients will interact with Pi-hole as opposed to above where Pi-hole is
@@ -1184,7 +1202,7 @@ show_db_entries() {
IFS=$'\r\n' IFS=$'\r\n'
local entries=() local entries=()
mapfile -t entries < <(\ mapfile -t entries < <(\
sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" \ pihole-FTL sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" \
-cmd ".headers on" \ -cmd ".headers on" \
-cmd ".mode column" \ -cmd ".mode column" \
-cmd ".width ${widths}" \ -cmd ".width ${widths}" \
@@ -1209,7 +1227,7 @@ show_FTL_db_entries() {
IFS=$'\r\n' IFS=$'\r\n'
local entries=() local entries=()
mapfile -t entries < <(\ mapfile -t entries < <(\
sqlite3 "${PIHOLE_FTL_DB_FILE}" \ pihole-FTL sqlite3 "${PIHOLE_FTL_DB_FILE}" \
-cmd ".headers on" \ -cmd ".headers on" \
-cmd ".mode column" \ -cmd ".mode column" \
-cmd ".width ${widths}" \ -cmd ".width ${widths}" \
@@ -1266,7 +1284,7 @@ analyze_gravity_list() {
log_write "${COL_GREEN}${gravity_permissions}${COL_NC}" log_write "${COL_GREEN}${gravity_permissions}${COL_NC}"
show_db_entries "Info table" "SELECT property,value FROM info" "20 40" show_db_entries "Info table" "SELECT property,value FROM info" "20 40"
gravity_updated_raw="$(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT value FROM info where property = 'updated'")" gravity_updated_raw="$(pihole-FTL sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT value FROM info where property = 'updated'")"
gravity_updated="$(date -d @"${gravity_updated_raw}")" gravity_updated="$(date -d @"${gravity_updated_raw}")"
log_write " Last gravity run finished at: ${COL_CYAN}${gravity_updated}${COL_NC}" log_write " Last gravity run finished at: ${COL_CYAN}${gravity_updated}${COL_NC}"
log_write "" log_write ""
@@ -1274,7 +1292,7 @@ analyze_gravity_list() {
OLD_IFS="$IFS" OLD_IFS="$IFS"
IFS=$'\r\n' IFS=$'\r\n'
local gravity_sample=() local gravity_sample=()
mapfile -t gravity_sample < <(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity LIMIT 10") mapfile -t gravity_sample < <(pihole-FTL sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity LIMIT 10")
log_write " ${COL_CYAN}----- First 10 Gravity Domains -----${COL_NC}" log_write " ${COL_CYAN}----- First 10 Gravity Domains -----${COL_NC}"
for line in "${gravity_sample[@]}"; do for line in "${gravity_sample[@]}"; do
@@ -1384,9 +1402,9 @@ upload_to_tricorder() {
log_write "${TICK} ${COL_GREEN}** FINISHED DEBUGGING! **${COL_NC}\\n" log_write "${TICK} ${COL_GREEN}** FINISHED DEBUGGING! **${COL_NC}\\n"
# Provide information on what they should do with their token # Provide information on what they should do with their token
log_write " * The debug log can be uploaded to tricorder.pi-hole.net for sharing with developers only." log_write " * The debug log can be uploaded to tricorder.pi-hole.net for sharing with developers only."
# If pihole -d is running automatically (usually through the dashboard) # If pihole -d is running automatically
if [[ "${AUTOMATED}" ]]; then if [[ "${AUTOMATED}" ]]; then
# let the user know # let the user know
log_write "${INFO} Debug script running in automated mode" log_write "${INFO} Debug script running in automated mode"
@@ -1394,16 +1412,19 @@ upload_to_tricorder() {
curl_to_tricorder curl_to_tricorder
# If we're not running in automated mode, # If we're not running in automated mode,
else else
echo "" # if not being called from the web interface
# give the user a choice of uploading it or not if [[ ! "${WEBCALL}" ]]; then
# Users can review the log file locally (or the output of the script since they are the same) and try to self-diagnose their problem echo ""
read -r -p "[?] Would you like to upload the log? [y/N] " response # give the user a choice of uploading it or not
case ${response} in # Users can review the log file locally (or the output of the script since they are the same) and try to self-diagnose their problem
# If they say yes, run our function for uploading the log read -r -p "[?] Would you like to upload the log? [y/N] " response
[yY][eE][sS]|[yY]) curl_to_tricorder;; case ${response} in
# If they choose no, just exit out of the script # If they say yes, run our function for uploading the log
*) log_write " * Log will ${COL_GREEN}NOT${COL_NC} be uploaded to tricorder.\\n * A local copy of the debug log can be found at: ${COL_CYAN}${PIHOLE_DEBUG_LOG}${COL_NC}\\n";exit; [yY][eE][sS]|[yY]) curl_to_tricorder;;
esac # If they choose no, just exit out of the script
*) log_write " * Log will ${COL_GREEN}NOT${COL_NC} be uploaded to tricorder.\\n * A local copy of the debug log can be found at: ${COL_CYAN}${PIHOLE_DEBUG_LOG}${COL_NC}\\n";exit;
esac
fi
fi fi
# Check if tricorder.pi-hole.net is reachable and provide token # Check if tricorder.pi-hole.net is reachable and provide token
# along with some additional useful information # along with some additional useful information
@@ -1423,8 +1444,13 @@ upload_to_tricorder() {
# If no token was generated # If no token was generated
else else
# Show an error and some help instructions # Show an error and some help instructions
log_write "${CROSS} ${COL_RED}There was an error uploading your debug log.${COL_NC}" # Skip this if being called from web interface and autmatic mode was not chosen (users opt-out to upload)
log_write " * Please try again or contact the Pi-hole team for assistance." if [[ "${WEBCALL}" ]] && [[ ! "${AUTOMATED}" ]]; then
:
else
log_write "${CROSS} ${COL_RED}There was an error uploading your debug log.${COL_NC}"
log_write " * Please try again or contact the Pi-hole team for assistance."
fi
fi fi
# Finally, show where the log file is no matter the outcome of the function so users can look at it # Finally, show where the log file is no matter the outcome of the function so users can look at it
log_write " * A local copy of the debug log can be found at: ${COL_CYAN}${PIHOLE_DEBUG_LOG}${COL_NC}\\n" log_write " * A local copy of the debug log can be found at: ${COL_CYAN}${PIHOLE_DEBUG_LOG}${COL_NC}\\n"
@@ -1443,6 +1469,7 @@ check_selinux
check_firewalld check_firewalld
processor_check processor_check
disk_usage disk_usage
check_ip_command
check_networking check_networking
check_name_resolution check_name_resolution
check_dhcp_servers check_dhcp_servers


@@ -63,7 +63,7 @@ else
fi fi
fi fi
# Delete most recent 24 hours from FTL's database, leave even older data intact (don't wipe out all history) # Delete most recent 24 hours from FTL's database, leave even older data intact (don't wipe out all history)
deleted=$(sqlite3 "${DBFILE}" "DELETE FROM queries WHERE timestamp >= strftime('%s','now')-86400; select changes() from queries limit 1") deleted=$(pihole-FTL sqlite3 "${DBFILE}" "DELETE FROM queries WHERE timestamp >= strftime('%s','now')-86400; select changes() from queries limit 1")
# Restart pihole-FTL to force reloading history # Restart pihole-FTL to force reloading history
sudo pihole restartdns sudo pihole restartdns


@@ -21,7 +21,7 @@ matchType="match"
# Source pihole-FTL from install script # Source pihole-FTL from install script
pihole_FTL="${piholeDir}/pihole-FTL.conf" pihole_FTL="${piholeDir}/pihole-FTL.conf"
if [[ -f "${pihole_FTL}" ]]; then if [[ -f "${pihole_FTL}" ]]; then
source "${pihole_FTL}" source "${pihole_FTL}"
fi fi
# Set this only after sourcing pihole-FTL.conf as the gravity database path may # Set this only after sourcing pihole-FTL.conf as the gravity database path may
@@ -48,7 +48,7 @@ scanList(){
# Iterate through each regexp and check whether it matches the domainQuery # Iterate through each regexp and check whether it matches the domainQuery
# If it does, print the matching regexp and continue looping # If it does, print the matching regexp and continue looping
# Input 1 - regexps | Input 2 - domainQuery # Input 1 - regexps | Input 2 - domainQuery
"regex" ) "regex" )
for list in ${lists}; do for list in ${lists}; do
if [[ "${domain}" =~ ${list} ]]; then if [[ "${domain}" =~ ${list} ]]; then
printf "%b\n" "${list}"; printf "%b\n" "${list}";
@@ -109,27 +109,27 @@ scanDatabaseTable() {
# behavior. The "ESCAPE '\'" clause specifies that an underscore preceded by an '\' should be matched # behavior. The "ESCAPE '\'" clause specifies that an underscore preceded by an '\' should be matched
# as a literal underscore character. We pretreat the $domain variable accordingly to escape underscores. # as a literal underscore character. We pretreat the $domain variable accordingly to escape underscores.
if [[ "${table}" == "gravity" ]]; then if [[ "${table}" == "gravity" ]]; then
case "${exact}" in case "${exact}" in
"exact" ) querystr="SELECT gravity.domain,adlist.address,adlist.enabled FROM gravity LEFT JOIN adlist ON adlist.id = gravity.adlist_id WHERE domain = '${domain}'";; "exact" ) querystr="SELECT gravity.domain,adlist.address,adlist.enabled FROM gravity LEFT JOIN adlist ON adlist.id = gravity.adlist_id WHERE domain = '${domain}'";;
* ) querystr="SELECT gravity.domain,adlist.address,adlist.enabled FROM gravity LEFT JOIN adlist ON adlist.id = gravity.adlist_id WHERE domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";; * ) querystr="SELECT gravity.domain,adlist.address,adlist.enabled FROM gravity LEFT JOIN adlist ON adlist.id = gravity.adlist_id WHERE domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";;
esac esac
else else
case "${exact}" in case "${exact}" in
"exact" ) querystr="SELECT domain,enabled FROM domainlist WHERE type = '${type}' AND domain = '${domain}'";; "exact" ) querystr="SELECT domain,enabled FROM domainlist WHERE type = '${type}' AND domain = '${domain}'";;
* ) querystr="SELECT domain,enabled FROM domainlist WHERE type = '${type}' AND domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";; * ) querystr="SELECT domain,enabled FROM domainlist WHERE type = '${type}' AND domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";;
esac esac
fi fi
# Send prepared query to gravity database # Send prepared query to gravity database
result="$(sqlite3 "${gravityDBfile}" "${querystr}")" 2> /dev/null result="$(pihole-FTL sqlite3 "${gravityDBfile}" "${querystr}")" 2> /dev/null
if [[ -z "${result}" ]]; then if [[ -z "${result}" ]]; then
# Return early when there are no matches in this table # Return early when there are no matches in this table
return return
fi fi
if [[ "${table}" == "gravity" ]]; then if [[ "${table}" == "gravity" ]]; then
echo "${result}" echo "${result}"
return return
fi fi
# Mark domain as having been white-/blacklist matched (global variable) # Mark domain as having been white-/blacklist matched (global variable)
@@ -164,7 +164,7 @@ scanRegexDatabaseTable() {
type="${3:-}" type="${3:-}"
# Query all regex from the corresponding database tables # Query all regex from the corresponding database tables
mapfile -t regexList < <(sqlite3 "${gravityDBfile}" "SELECT domain FROM domainlist WHERE type = ${type}" 2> /dev/null) mapfile -t regexList < <(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT domain FROM domainlist WHERE type = ${type}" 2> /dev/null)
# If we have regexps to process # If we have regexps to process
if [[ "${#regexList[@]}" -ne 0 ]]; then if [[ "${#regexList[@]}" -ne 0 ]]; then
@@ -233,15 +233,15 @@ for result in "${results[@]}"; do
adlistAddress="${extra/|*/}" adlistAddress="${extra/|*/}"
extra="${extra#*|}" extra="${extra#*|}"
if [[ "${extra}" == "0" ]]; then if [[ "${extra}" == "0" ]]; then
extra="(disabled)" extra=" (disabled)"
else else
extra="" extra=""
fi fi
if [[ -n "${blockpage}" ]]; then if [[ -n "${blockpage}" ]]; then
echo "0 ${adlistAddress}" echo "0 ${adlistAddress}"
elif [[ -n "${exact}" ]]; then elif [[ -n "${exact}" ]]; then
echo " - ${adlistAddress} ${extra}" echo " - ${adlistAddress}${extra}"
else else
if [[ ! "${adlistAddress}" == "${adlistAddress_prev:-}" ]]; then if [[ ! "${adlistAddress}" == "${adlistAddress_prev:-}" ]]; then
count="" count=""
@@ -256,7 +256,7 @@ for result in "${results[@]}"; do
[[ "${count}" -gt "${max_count}" ]] && continue [[ "${count}" -gt "${max_count}" ]] && continue
echo " ${COL_GRAY}Over ${count} results found, skipping rest of file${COL_NC}" echo " ${COL_GRAY}Over ${count} results found, skipping rest of file${COL_NC}"
else else
echo " ${match} ${extra}" echo " ${match}${extra}"
fi fi
fi fi
done done


@@ -35,6 +35,7 @@ source "/opt/pihole/COL_TABLE"
GitCheckUpdateAvail() { GitCheckUpdateAvail() {
local directory local directory
local curBranch
directory="${1}" directory="${1}"
curdir=$PWD curdir=$PWD
cd "${directory}" || return cd "${directory}" || return
@@ -42,18 +43,29 @@ GitCheckUpdateAvail() {
# Fetch latest changes in this repo # Fetch latest changes in this repo
git fetch --quiet origin git fetch --quiet origin
# @ alone is a shortcut for HEAD. Older versions of git # Check current branch. If it is master, then check for the latest available tag instead of latest commit.
# need @{0} curBranch=$(git rev-parse --abbrev-ref HEAD)
LOCAL="$(git rev-parse "@{0}")" if [[ "${curBranch}" == "master" ]]; then
# get the latest local tag
LOCAL=$(git describe --abbrev=0 --tags master)
# get the latest tag from remote
REMOTE=$(git describe --abbrev=0 --tags origin/master)
else
# @ alone is a shortcut for HEAD. Older versions of git
# need @{0}
LOCAL="$(git rev-parse "@{0}")"
# The suffix @{upstream} to a branchname
# (short form <branchname>@{u}) refers
# to the branch that the branch specified
# by branchname is set to build on top of#
# (configured with branch.<name>.remote and
# branch.<name>.merge). A missing branchname
# defaults to the current one.
REMOTE="$(git rev-parse "@{upstream}")"
fi
# The suffix @{upstream} to a branchname
# (short form <branchname>@{u}) refers
# to the branch that the branch specified
# by branchname is set to build on top of#
# (configured with branch.<name>.remote and
# branch.<name>.merge). A missing branchname
# defaults to the current one.
REMOTE="$(git rev-parse "@{upstream}")"
if [[ "${#LOCAL}" == 0 ]]; then if [[ "${#LOCAL}" == 0 ]]; then
echo -e "\\n ${COL_LIGHT_RED}Error: Local revision could not be obtained, please contact Pi-hole Support" echo -e "\\n ${COL_LIGHT_RED}Error: Local revision could not be obtained, please contact Pi-hole Support"
@@ -200,7 +212,7 @@ main() {
if [[ "${FTL_update}" == true || "${core_update}" == true ]]; then if [[ "${FTL_update}" == true || "${core_update}" == true ]]; then
${PI_HOLE_FILES_DIR}/automated\ install/basic-install.sh --reconfigure --unattended || \ ${PI_HOLE_FILES_DIR}/automated\ install/basic-install.sh --reconfigure --unattended || \
echo -e "${basicError}" && exit 1 echo -e "${basicError}" && exit 1
fi fi
if [[ "${FTL_update}" == true || "${core_update}" == true || "${web_update}" == true ]]; then if [[ "${FTL_update}" == true || "${core_update}" == true || "${web_update}" == true ]]; then

advanced/Scripts/utils.sh (new executable file, 35 lines)

@@ -0,0 +1,35 @@
#!/usr/bin/env bash
# Pi-hole: A black hole for Internet advertisements
# (c) 2017 Pi-hole, LLC (https://pi-hole.net)
# Network-wide ad blocking via your own hardware.
#
# Script to hold utility functions for use in other scripts
#
# This file is copyright under the latest version of the EUPL.
# Please see LICENSE file for your rights under this license.
# Basic Housekeeping rules
# - Functions must be self contained
# - Functions must be added in alphabetical order
# - Functions must be documented
# - New functions must have a test added for them in test/test_any_utils.py
#######################
# Takes three arguments key, value, and file.
# Checks the target file for the existence of the key
# - If it exists, it changes the value
# - If it does not exist, it adds the value
#
# Example usage:
# addOrEditKeyValPair "BLOCKING_ENABLED" "true" "/etc/pihole/setupVars.conf"
#######################
addOrEditKeyValPair() {
local key="${1}"
local value="${2}"
local file="${3}"
if grep -q "^${key}=" "${file}"; then
sed -i "/^${key}=/c\\${key}=${value}" "${file}"
else
echo "${key}=${value}" >> "${file}"
fi
}
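A short usage illustration of the new helper; the sourced path assumes the script is installed to /opt/pihole like the other Pi-hole scripts, and the file is a throwaway copy for the demonstration:

    #!/usr/bin/env bash
    # Sketch: addOrEditKeyValPair replaces an existing key and appends a missing one.
    source /opt/pihole/utils.sh            # assumed install location of utils.sh

    conf="/tmp/setupVars.conf"             # throwaway demo file
    printf 'BLOCKING_ENABLED=false\n' > "${conf}"

    addOrEditKeyValPair "BLOCKING_ENABLED" "true" "${conf}"   # key exists  -> value replaced
    addOrEditKeyValPair "PIHOLE_DNS_1" "9.9.9.9" "${conf}"    # key missing -> line appended

    cat "${conf}"
    # BLOCKING_ENABLED=true
    # PIHOLE_DNS_1=9.9.9.9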


@@ -13,6 +13,10 @@ DEFAULT="-1"
COREGITDIR="/etc/.pihole/" COREGITDIR="/etc/.pihole/"
WEBGITDIR="/var/www/html/admin/" WEBGITDIR="/var/www/html/admin/"
# Source the setupvars config file
# shellcheck disable=SC1091
source /etc/pihole/setupVars.conf
getLocalVersion() { getLocalVersion() {
# FTL requires a different method # FTL requires a different method
if [[ "$1" == "FTL" ]]; then if [[ "$1" == "FTL" ]]; then
@@ -91,10 +95,11 @@ getRemoteVersion(){
#If the above file exists, then we can read from that. Prevents overuse of GitHub API #If the above file exists, then we can read from that. Prevents overuse of GitHub API
if [[ -f "$cachedVersions" ]]; then if [[ -f "$cachedVersions" ]]; then
IFS=' ' read -r -a arrCache < "$cachedVersions" IFS=' ' read -r -a arrCache < "$cachedVersions"
case $daemon in case $daemon in
"pi-hole" ) echo "${arrCache[0]}";; "pi-hole" ) echo "${arrCache[0]}";;
"AdminLTE" ) echo "${arrCache[1]}";; "AdminLTE" ) [[ "${INSTALL_WEB_INTERFACE}" == true ]] && echo "${arrCache[1]}";;
"FTL" ) echo "${arrCache[2]}";; "FTL" ) [[ "${INSTALL_WEB_INTERFACE}" == true ]] && echo "${arrCache[2]}" || echo "${arrCache[1]}";;
esac esac
return 0 return 0
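The case statement has to account for the web interface being optional: without AdminLTE the cached version line holds one field less, so FTL shifts from index 2 to index 1. A sketch with illustrative version strings:

    #!/usr/bin/env bash
    # Sketch: why the FTL index in arrCache depends on INSTALL_WEB_INTERFACE.
    with_web="v5.9 v5.11 v5.14"      # core, web, FTL
    without_web="v5.9 v5.14"         # core, FTL

    IFS=' ' read -r -a arrCache <<< "${with_web}"
    echo "web installed:     FTL is field 2 -> ${arrCache[2]}"

    IFS=' ' read -r -a arrCache <<< "${without_web}"
    echo "web not installed: FTL is field 1 -> ${arrCache[1]}"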
@@ -117,7 +122,7 @@ getLocalBranch(){
local directory="${1}" local directory="${1}"
local branch local branch
# Local FTL branch is stored in /etc/pihole/ftlbranch # Local FTL branch is stored in /etc/pihole/ftlbranch
if [[ "$1" == "FTL" ]]; then if [[ "$1" == "FTL" ]]; then
branch="$(pihole-FTL branch)" branch="$(pihole-FTL branch)"
else else
@@ -140,6 +145,11 @@ getLocalBranch(){
} }
versionOutput() { versionOutput() {
if [[ "$1" == "AdminLTE" && "${INSTALL_WEB_INTERFACE}" != true ]]; then
echo " WebAdmin not installed"
return 1
fi
[[ "$1" == "pi-hole" ]] && GITDIR=$COREGITDIR [[ "$1" == "pi-hole" ]] && GITDIR=$COREGITDIR
[[ "$1" == "AdminLTE" ]] && GITDIR=$WEBGITDIR [[ "$1" == "AdminLTE" ]] && GITDIR=$WEBGITDIR
[[ "$1" == "FTL" ]] && GITDIR="FTL" [[ "$1" == "FTL" ]] && GITDIR="FTL"
@@ -166,6 +176,7 @@ versionOutput() {
output="Latest ${1^} hash is $latHash" output="Latest ${1^} hash is $latHash"
else else
errorOutput errorOutput
return 1
fi fi
[[ -n "$output" ]] && echo " $output" [[ -n "$output" ]] && echo " $output"
@@ -177,10 +188,6 @@ errorOutput() {
} }
defaultOutput() { defaultOutput() {
# Source the setupvars config file
# shellcheck disable=SC1091
source /etc/pihole/setupVars.conf
versionOutput "pi-hole" "$@" versionOutput "pi-hole" "$@"
if [[ "${INSTALL_WEB_INTERFACE}" == true ]]; then if [[ "${INSTALL_WEB_INTERFACE}" == true ]]; then


@@ -37,15 +37,16 @@ Example: pihole -a -p password
Set options for the Admin Console Set options for the Admin Console
Options: Options:
-p, password Set Admin Console password -p, password Set Admin Console password
-c, celsius Set Celsius as preferred temperature unit -c, celsius Set Celsius as preferred temperature unit
-f, fahrenheit Set Fahrenheit as preferred temperature unit -f, fahrenheit Set Fahrenheit as preferred temperature unit
-k, kelvin Set Kelvin as preferred temperature unit -k, kelvin Set Kelvin as preferred temperature unit
-e, email Set an administrative contact address for the Block Page -e, email Set an administrative contact address for the Block Page
-h, --help Show this help dialog -h, --help Show this help dialog
-i, interface Specify dnsmasq's interface listening behavior -i, interface Specify dnsmasq's interface listening behavior
-l, privacylevel Set privacy level (0 = lowest, 3 = highest) -l, privacylevel Set privacy level (0 = lowest, 3 = highest)
-t, teleporter Backup configuration as an archive" -t, teleporter Backup configuration as an archive
-t, teleporter myname.tar.gz Backup configuration to archive with name myname.tar.gz as specified"
exit 0 exit 0
} }
@@ -122,14 +123,14 @@ SetWebPassword() {
read -s -r -p "Enter New Password (Blank for no password): " PASSWORD read -s -r -p "Enter New Password (Blank for no password): " PASSWORD
echo "" echo ""
if [ "${PASSWORD}" == "" ]; then if [ "${PASSWORD}" == "" ]; then
change_setting "WEBPASSWORD" "" change_setting "WEBPASSWORD" ""
echo -e " ${TICK} Password Removed" echo -e " ${TICK} Password Removed"
exit 0 exit 0
fi fi
read -s -r -p "Confirm Password: " CONFIRM read -s -r -p "Confirm Password: " CONFIRM
echo "" echo ""
fi fi
if [ "${PASSWORD}" == "${CONFIRM}" ] ; then if [ "${PASSWORD}" == "${CONFIRM}" ] ; then
@@ -199,6 +200,8 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC68345710423
# Setup interface listening behavior of dnsmasq # Setup interface listening behavior of dnsmasq
delete_dnsmasq_setting "interface" delete_dnsmasq_setting "interface"
delete_dnsmasq_setting "local-service" delete_dnsmasq_setting "local-service"
delete_dnsmasq_setting "except-interface"
delete_dnsmasq_setting "bind-interfaces"
if [[ "${DNSMASQ_LISTENING}" == "all" ]]; then if [[ "${DNSMASQ_LISTENING}" == "all" ]]; then
# Listen on all interfaces, permit all origins # Listen on all interfaces, permit all origins
@@ -207,6 +210,7 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC68345710423
# Listen only on all interfaces, but only local subnets # Listen only on all interfaces, but only local subnets
add_dnsmasq_setting "local-service" add_dnsmasq_setting "local-service"
else else
# Options "bind" and "single"
# Listen only on one interface # Listen only on one interface
# Use eth0 as fallback interface if interface is missing in setupVars.conf # Use eth0 as fallback interface if interface is missing in setupVars.conf
if [ -z "${PIHOLE_INTERFACE}" ]; then if [ -z "${PIHOLE_INTERFACE}" ]; then
@@ -214,6 +218,11 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC68345710423
fi fi
add_dnsmasq_setting "interface" "${PIHOLE_INTERFACE}" add_dnsmasq_setting "interface" "${PIHOLE_INTERFACE}"
if [[ "${DNSMASQ_LISTENING}" == "bind" ]]; then
# Really bind to interface
add_dnsmasq_setting "bind-interfaces"
fi
fi fi
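The difference between "single" and the new "bind" mode comes down to one extra dnsmasq directive: both restrict answers to the chosen interface, but bind-interfaces additionally binds the listening sockets to it. A sketch of the directives that end up in the generated dnsmasq config (interface name illustrative; the script takes it from PIHOLE_INTERFACE):

    #!/usr/bin/env bash
    # Sketch: the dnsmasq directives written for "single" vs "bind".
    PIHOLE_INTERFACE="eth0"
    DNSMASQ_LISTENING="bind"

    printf 'interface=%s\n' "${PIHOLE_INTERFACE}"   # written in both modes
    if [[ "${DNSMASQ_LISTENING}" == "bind" ]]; then
        printf 'bind-interfaces\n'                  # only added in bind mode
    fi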
if [[ "${CONDITIONAL_FORWARDING}" == true ]]; then if [[ "${CONDITIONAL_FORWARDING}" == true ]]; then
@@ -247,8 +256,8 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC68345710423
3 ) REV_SERVER_CIDR="${arrRev[0]}.0.0.0/8";; 3 ) REV_SERVER_CIDR="${arrRev[0]}.0.0.0/8";;
esac esac
else else
# Set REV_SERVER_CIDR to whatever value it was set to # Set REV_SERVER_CIDR to whatever value it was set to
REV_SERVER_CIDR="${CONDITIONAL_FORWARDING_REVERSE}" REV_SERVER_CIDR="${CONDITIONAL_FORWARDING_REVERSE}"
fi fi
# If REV_SERVER_CIDR is not converted by the above, then use the REV_SERVER_TARGET variable to derive it # If REV_SERVER_CIDR is not converted by the above, then use the REV_SERVER_TARGET variable to derive it
@@ -371,34 +380,34 @@ ProcessDHCPSettings() {
source "${setupVars}" source "${setupVars}"
if [[ "${DHCP_ACTIVE}" == "true" ]]; then if [[ "${DHCP_ACTIVE}" == "true" ]]; then
interface="${PIHOLE_INTERFACE}" interface="${PIHOLE_INTERFACE}"
# Use eth0 as fallback interface # Use eth0 as fallback interface
if [ -z ${interface} ]; then if [ -z ${interface} ]; then
interface="eth0" interface="eth0"
fi fi
if [[ "${PIHOLE_DOMAIN}" == "" ]]; then if [[ "${PIHOLE_DOMAIN}" == "" ]]; then
PIHOLE_DOMAIN="lan" PIHOLE_DOMAIN="lan"
change_setting "PIHOLE_DOMAIN" "${PIHOLE_DOMAIN}" change_setting "PIHOLE_DOMAIN" "${PIHOLE_DOMAIN}"
fi fi
if [[ "${DHCP_LEASETIME}" == "0" ]]; then if [[ "${DHCP_LEASETIME}" == "0" ]]; then
leasetime="infinite" leasetime="infinite"
elif [[ "${DHCP_LEASETIME}" == "" ]]; then elif [[ "${DHCP_LEASETIME}" == "" ]]; then
leasetime="24" leasetime="24"
change_setting "DHCP_LEASETIME" "${leasetime}" change_setting "DHCP_LEASETIME" "${leasetime}"
elif [[ "${DHCP_LEASETIME}" == "24h" ]]; then elif [[ "${DHCP_LEASETIME}" == "24h" ]]; then
#Installation is affected by known bug, introduced in a previous version. #Installation is affected by known bug, introduced in a previous version.
#This will automatically clean up setupVars.conf and remove the unnecessary "h" #This will automatically clean up setupVars.conf and remove the unnecessary "h"
leasetime="24" leasetime="24"
change_setting "DHCP_LEASETIME" "${leasetime}" change_setting "DHCP_LEASETIME" "${leasetime}"
else else
leasetime="${DHCP_LEASETIME}h" leasetime="${DHCP_LEASETIME}h"
fi fi
# Write settings to file # Write settings to file
echo "############################################################################### echo "###############################################################################
# DHCP SERVER CONFIG FILE AUTOMATICALLY POPULATED BY PI-HOLE WEB INTERFACE. # # DHCP SERVER CONFIG FILE AUTOMATICALLY POPULATED BY PI-HOLE WEB INTERFACE. #
# ANY CHANGES MADE TO THIS FILE WILL BE LOST ON CHANGE # # ANY CHANGES MADE TO THIS FILE WILL BE LOST ON CHANGE #
############################################################################### ###############################################################################
@@ -408,34 +417,34 @@ dhcp-option=option:router,${DHCP_ROUTER}
dhcp-leasefile=/etc/pihole/dhcp.leases dhcp-leasefile=/etc/pihole/dhcp.leases
#quiet-dhcp #quiet-dhcp
" > "${dhcpconfig}" " > "${dhcpconfig}"
chmod 644 "${dhcpconfig}" chmod 644 "${dhcpconfig}"
if [[ "${PIHOLE_DOMAIN}" != "none" ]]; then if [[ "${PIHOLE_DOMAIN}" != "none" ]]; then
echo "domain=${PIHOLE_DOMAIN}" >> "${dhcpconfig}" echo "domain=${PIHOLE_DOMAIN}" >> "${dhcpconfig}"
# When there is a Pi-hole domain set and "Never forward non-FQDNs" is # When there is a Pi-hole domain set and "Never forward non-FQDNs" is
# ticked, we add `local=/domain/` to tell FTL that this domain is purely # ticked, we add `local=/domain/` to tell FTL that this domain is purely
# local and FTL may answer queries from /etc/hosts or DHCP but should # local and FTL may answer queries from /etc/hosts or DHCP but should
# never forward queries on that domain to any upstream servers # never forward queries on that domain to any upstream servers
if [[ "${DNS_FQDN_REQUIRED}" == true ]]; then if [[ "${DNS_FQDN_REQUIRED}" == true ]]; then
echo "local=/${PIHOLE_DOMAIN}/" >> "${dhcpconfig}" echo "local=/${PIHOLE_DOMAIN}/" >> "${dhcpconfig}"
fi
fi fi
fi
# Sourced from setupVars # Sourced from setupVars
# shellcheck disable=SC2154 # shellcheck disable=SC2154
if [[ "${DHCP_rapid_commit}" == "true" ]]; then if [[ "${DHCP_rapid_commit}" == "true" ]]; then
echo "dhcp-rapid-commit" >> "${dhcpconfig}" echo "dhcp-rapid-commit" >> "${dhcpconfig}"
fi fi
if [[ "${DHCP_IPv6}" == "true" ]]; then if [[ "${DHCP_IPv6}" == "true" ]]; then
echo "#quiet-dhcp6 echo "#quiet-dhcp6
#enable-ra #enable-ra
dhcp-option=option6:dns-server,[::] dhcp-option=option6:dns-server,[::]
dhcp-range=::100,::1ff,constructor:${interface},ra-names,slaac,64,3600 dhcp-range=::100,::1ff,constructor:${interface},ra-names,slaac,64,3600
ra-param=*,0,0 ra-param=*,0,0
" >> "${dhcpconfig}" " >> "${dhcpconfig}"
fi fi
else else
if [[ -f "${dhcpconfig}" ]]; then if [[ -f "${dhcpconfig}" ]]; then
@@ -515,13 +524,13 @@ CustomizeAdLists() {
if CheckUrl "${address}"; then if CheckUrl "${address}"; then
if [[ "${args[2]}" == "enable" ]]; then if [[ "${args[2]}" == "enable" ]]; then
sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 1 WHERE address = '${address}'" pihole-FTL sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 1 WHERE address = '${address}'"
elif [[ "${args[2]}" == "disable" ]]; then elif [[ "${args[2]}" == "disable" ]]; then
sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 0 WHERE address = '${address}'" pihole-FTL sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 0 WHERE address = '${address}'"
elif [[ "${args[2]}" == "add" ]]; then elif [[ "${args[2]}" == "add" ]]; then
sqlite3 "${gravityDBfile}" "INSERT OR IGNORE INTO adlist (address, comment) VALUES ('${address}', '${comment}')" pihole-FTL sqlite3 "${gravityDBfile}" "INSERT OR IGNORE INTO adlist (address, comment) VALUES ('${address}', '${comment}')"
elif [[ "${args[2]}" == "del" ]]; then elif [[ "${args[2]}" == "del" ]]; then
sqlite3 "${gravityDBfile}" "DELETE FROM adlist WHERE address = '${address}'" pihole-FTL sqlite3 "${gravityDBfile}" "DELETE FROM adlist WHERE address = '${address}'"
else else
echo "Not permitted" echo "Not permitted"
return 1 return 1
@@ -532,25 +541,6 @@ CustomizeAdLists() {
fi fi
} }
SetPrivacyMode() {
if [[ "${args[2]}" == "true" ]]; then
change_setting "API_PRIVACY_MODE" "true"
else
change_setting "API_PRIVACY_MODE" "false"
fi
}
ResolutionSettings() {
typ="${args[2]}"
state="${args[3]}"
if [[ "${typ}" == "forward" ]]; then
change_setting "API_GET_UPSTREAM_DNS_HOSTNAME" "${state}"
elif [[ "${typ}" == "clients" ]]; then
change_setting "API_GET_CLIENT_HOSTNAME" "${state}"
fi
}
AddDHCPStaticAddress() { AddDHCPStaticAddress() {
mac="${args[2]}" mac="${args[2]}"
ip="${args[3]}" ip="${args[3]}"
@@ -619,12 +609,13 @@ Example: 'pihole -a -i local'
Specify dnsmasq's network interface listening behavior Specify dnsmasq's network interface listening behavior
Interfaces: Interfaces:
local Listen on all interfaces, but only allow queries from local Only respond to queries from devices that
devices that are at most one hop away (local devices) are at most one hop away (local devices)
single Listen only on ${PIHOLE_INTERFACE} interface single Respond only on interface ${PIHOLE_INTERFACE}
bind Bind only on interface ${PIHOLE_INTERFACE}
all Listen on all interfaces, permit all origins" all Listen on all interfaces, permit all origins"
exit 0 exit 0
fi fi
if [[ "${args[2]}" == "all" ]]; then if [[ "${args[2]}" == "all" ]]; then
echo -e " ${INFO} Listening on all interfaces, permitting all origins. Please use a firewall!" echo -e " ${INFO} Listening on all interfaces, permitting all origins. Please use a firewall!"
@@ -632,6 +623,9 @@ Interfaces:
elif [[ "${args[2]}" == "local" ]]; then elif [[ "${args[2]}" == "local" ]]; then
echo -e " ${INFO} Listening on all interfaces, permitting origins from one hop away (LAN)" echo -e " ${INFO} Listening on all interfaces, permitting origins from one hop away (LAN)"
change_setting "DNSMASQ_LISTENING" "local" change_setting "DNSMASQ_LISTENING" "local"
elif [[ "${args[2]}" == "bind" ]]; then
echo -e " ${INFO} Binding on interface ${PIHOLE_INTERFACE}"
change_setting "DNSMASQ_LISTENING" "bind"
else else
echo -e " ${INFO} Listening only on interface ${PIHOLE_INTERFACE}" echo -e " ${INFO} Listening only on interface ${PIHOLE_INTERFACE}"
change_setting "DNSMASQ_LISTENING" "single" change_setting "DNSMASQ_LISTENING" "single"
@@ -647,12 +641,17 @@ Interfaces:
} }
Teleporter() { Teleporter() {
local datetimestamp local filename
local host filename="${args[2]}"
datetimestamp=$(date "+%Y-%m-%d_%H-%M-%S") if [[ -z "${filename}" ]]; then
host=$(hostname) local datetimestamp
host="${host//./_}" local host
php /var/www/html/admin/scripts/pi-hole/php/teleporter.php > "pi-hole-${host:-noname}-teleporter_${datetimestamp}.tar.gz" datetimestamp=$(date "+%Y-%m-%d_%H-%M-%S")
host=$(hostname)
host="${host//./_}"
filename="pi-hole-${host:-noname}-teleporter_${datetimestamp}.tar.gz"
fi
php /var/www/html/admin/scripts/pi-hole/php/teleporter.php > "${filename}"
} }
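Usage of the extended option, matching the new help text above:

    #!/usr/bin/env bash
    # Sketch: teleporter backups with and without a custom archive name.
    pihole -a -t                     # pi-hole-<host>-teleporter_<timestamp>.tar.gz
    pihole -a -t mybackup.tar.gz     # archive written to mybackup.tar.gz instead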
checkDomain() checkDomain()
@@ -673,27 +672,27 @@ addAudit()
domains="" domains=""
for domain in "$@" for domain in "$@"
do do
# Check domain to be added. Only continue if it is valid # Check domain to be added. Only continue if it is valid
validDomain="$(checkDomain "${domain}")" validDomain="$(checkDomain "${domain}")"
if [[ -n "${validDomain}" ]]; then if [[ -n "${validDomain}" ]]; then
# Put comma in between domains when there is # Put comma in between domains when there is
# more than one domains to be added # more than one domains to be added
# SQL INSERT allows adding multiple rows at once using the format # SQL INSERT allows adding multiple rows at once using the format
## INSERT INTO table (domain) VALUES ('abc.de'),('fgh.ij'),('klm.no'),('pqr.st'); ## INSERT INTO table (domain) VALUES ('abc.de'),('fgh.ij'),('klm.no'),('pqr.st');
if [[ -n "${domains}" ]]; then if [[ -n "${domains}" ]]; then
domains="${domains}," domains="${domains},"
fi
domains="${domains}('${domain}')"
fi fi
domains="${domains}('${domain}')"
fi
done done
# Insert only the domain here. The date_added field will be # Insert only the domain here. The date_added field will be
# filled with its default value (date_added = current timestamp) # filled with its default value (date_added = current timestamp)
sqlite3 "${gravityDBfile}" "INSERT INTO domain_audit (domain) VALUES ${domains};" pihole-FTL sqlite3 "${gravityDBfile}" "INSERT INTO domain_audit (domain) VALUES ${domains};"
} }
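The loop above collects every valid domain into a single multi-row VALUES list, so the audit entries are written with one INSERT. For two domains the generated statement would look like this (domains are illustrative):

    #!/usr/bin/env bash
    # Sketch: the SQL that addAudit builds for two valid domains.
    domains="('example.com'),('ads.example.net')"
    echo "INSERT INTO domain_audit (domain) VALUES ${domains};"
    # passed to: pihole-FTL sqlite3 "${gravityDBfile}" "INSERT INTO domain_audit (domain) VALUES ${domains};"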
clearAudit() clearAudit()
{ {
sqlite3 "${gravityDBfile}" "DELETE FROM domain_audit;" pihole-FTL sqlite3 "${gravityDBfile}" "DELETE FROM domain_audit;"
} }
SetPrivacyLevel() { SetPrivacyLevel() {
@@ -726,7 +725,7 @@ AddCustomDNSAddress() {
# Restart dnsmasq to load new custom DNS entries only if $reload not false # Restart dnsmasq to load new custom DNS entries only if $reload not false
if [[ ! $reload == "false" ]]; then if [[ ! $reload == "false" ]]; then
RestartDNS RestartDNS
fi fi
} }
@@ -740,19 +739,19 @@ RemoveCustomDNSAddress() {
validHost="$(checkDomain "${host}")" validHost="$(checkDomain "${host}")"
if [[ -n "${validHost}" ]]; then if [[ -n "${validHost}" ]]; then
if valid_ip "${ip}" || valid_ip6 "${ip}" ; then if valid_ip "${ip}" || valid_ip6 "${ip}" ; then
sed -i "/^${ip} ${validHost}$/d" "${dnscustomfile}" sed -i "/^${ip} ${validHost}$/Id" "${dnscustomfile}"
else else
echo -e " ${CROSS} Invalid IP has been passed" echo -e " ${CROSS} Invalid IP has been passed"
exit 1 exit 1
fi fi
else else
echo " ${CROSS} Invalid Domain passed!" echo " ${CROSS} Invalid Domain passed!"
exit 1 exit 1
fi fi
# Restart dnsmasq to load new custom DNS entries only if reload is not false # Restart dnsmasq to load new custom DNS entries only if reload is not false
if [[ ! $reload == "false" ]]; then if [[ ! $reload == "false" ]]; then
RestartDNS RestartDNS
fi fi
} }
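The trailing I on the sed address is GNU sed's case-insensitive modifier, which is what lets a mixed-case record be removed by a lower-case request. A small self-contained illustration (file and entry are throwaway values):

    #!/usr/bin/env bash
    # Sketch: the /pattern/Id address used above deletes regardless of case.
    dnscustomfile="/tmp/custom.list"           # throwaway demo file
    printf '192.168.1.10 MyHost.LAN\n' > "${dnscustomfile}"

    sed -i "/^192.168.1.10 myhost.lan$/Id" "${dnscustomfile}"

    wc -l < "${dnscustomfile}"   # 0 - the mixed-case entry is gone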
@@ -767,10 +766,10 @@ AddCustomCNAMERecord() {
if [[ -n "${validDomain}" ]]; then if [[ -n "${validDomain}" ]]; then
validTarget="$(checkDomain "${target}")" validTarget="$(checkDomain "${target}")"
if [[ -n "${validTarget}" ]]; then if [[ -n "${validTarget}" ]]; then
echo "cname=${validDomain},${validTarget}" >> "${dnscustomcnamefile}" echo "cname=${validDomain},${validTarget}" >> "${dnscustomcnamefile}"
else else
echo " ${CROSS} Invalid Target Passed!" echo " ${CROSS} Invalid Target Passed!"
exit 1 exit 1
fi fi
else else
echo " ${CROSS} Invalid Domain passed!" echo " ${CROSS} Invalid Domain passed!"
@@ -778,7 +777,7 @@ AddCustomCNAMERecord() {
fi fi
# Restart dnsmasq to load new custom CNAME records only if reload is not false # Restart dnsmasq to load new custom CNAME records only if reload is not false
if [[ ! $reload == "false" ]]; then if [[ ! $reload == "false" ]]; then
RestartDNS RestartDNS
fi fi
} }
@@ -793,10 +792,10 @@ RemoveCustomCNAMERecord() {
if [[ -n "${validDomain}" ]]; then if [[ -n "${validDomain}" ]]; then
validTarget="$(checkDomain "${target}")" validTarget="$(checkDomain "${target}")"
if [[ -n "${validTarget}" ]]; then if [[ -n "${validTarget}" ]]; then
sed -i "/cname=${validDomain},${validTarget}$/d" "${dnscustomcnamefile}" sed -i "/cname=${validDomain},${validTarget}$/Id" "${dnscustomcnamefile}"
else else
echo " ${CROSS} Invalid Target Passed!" echo " ${CROSS} Invalid Target Passed!"
exit 1 exit 1
fi fi
else else
echo " ${CROSS} Invalid Domain passed!" echo " ${CROSS} Invalid Domain passed!"
@@ -805,7 +804,7 @@ RemoveCustomCNAMERecord() {
# Restart dnsmasq to update removed custom CNAME records only if $reload not false # Restart dnsmasq to update removed custom CNAME records only if $reload not false
if [[ ! $reload == "false" ]]; then if [[ ! $reload == "false" ]]; then
RestartDNS RestartDNS
fi fi
} }
@@ -829,8 +828,6 @@ main() {
"layout" ) SetWebUILayout;; "layout" ) SetWebUILayout;;
"theme" ) SetWebUITheme;; "theme" ) SetWebUITheme;;
"-h" | "--help" ) helpFunc;; "-h" | "--help" ) helpFunc;;
"privacymode" ) SetPrivacyMode;;
"resolve" ) ResolutionSettings;;
"addstaticdhcp" ) AddDHCPStaticAddress;; "addstaticdhcp" ) AddDHCPStaticAddress;;
"removestaticdhcp" ) RemoveDHCPStaticAddress;; "removestaticdhcp" ) RemoveDHCPStaticAddress;;
"-e" | "email" ) SetAdminEmail "$3";; "-e" | "email" ) SetAdminEmail "$3";;


@@ -12,14 +12,17 @@ INSERT OR REPLACE INTO "group" SELECT * FROM OLD."group";
INSERT OR REPLACE INTO domain_audit SELECT * FROM OLD.domain_audit; INSERT OR REPLACE INTO domain_audit SELECT * FROM OLD.domain_audit;
INSERT OR REPLACE INTO domainlist SELECT * FROM OLD.domainlist; INSERT OR REPLACE INTO domainlist SELECT * FROM OLD.domainlist;
DELETE FROM OLD.domainlist_by_group WHERE domainlist_id NOT IN (SELECT id FROM OLD.domainlist);
INSERT OR REPLACE INTO domainlist_by_group SELECT * FROM OLD.domainlist_by_group; INSERT OR REPLACE INTO domainlist_by_group SELECT * FROM OLD.domainlist_by_group;
INSERT OR REPLACE INTO adlist SELECT * FROM OLD.adlist; INSERT OR REPLACE INTO adlist SELECT * FROM OLD.adlist;
DELETE FROM OLD.adlist_by_group WHERE adlist_id NOT IN (SELECT id FROM OLD.adlist);
INSERT OR REPLACE INTO adlist_by_group SELECT * FROM OLD.adlist_by_group; INSERT OR REPLACE INTO adlist_by_group SELECT * FROM OLD.adlist_by_group;
INSERT OR REPLACE INTO info SELECT * FROM OLD.info; INSERT OR REPLACE INTO info SELECT * FROM OLD.info;
INSERT OR REPLACE INTO client SELECT * FROM OLD.client; INSERT OR REPLACE INTO client SELECT * FROM OLD.client;
DELETE FROM OLD.client_by_group WHERE client_id NOT IN (SELECT id FROM OLD.client);
INSERT OR REPLACE INTO client_by_group SELECT * FROM OLD.client_by_group; INSERT OR REPLACE INTO client_by_group SELECT * FROM OLD.client_by_group;


@@ -0,0 +1,2 @@
#; Pi-hole FTL config file
#; Comments should start with #; to avoid issues with PHP and bash reading this file


@@ -21,12 +21,15 @@ start() {
else else
# Touch files to ensure they exist (create if non-existing, preserve if existing) # Touch files to ensure they exist (create if non-existing, preserve if existing)
mkdir -pm 0755 /run/pihole mkdir -pm 0755 /run/pihole
touch /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole-FTL.log /var/log/pihole.log /etc/pihole/dhcp.leases [ ! -f /run/pihole-FTL.pid ] && install -m 644 -o pihole -g pihole /dev/null /run/pihole-FTL.pid
[ ! -f /run/pihole-FTL.port ] && install -m 644 -o pihole -g pihole /dev/null /run/pihole-FTL.port
[ ! -f /var/log/pihole-FTL.log ] && install -m 644 -o pihole -g pihole /dev/null /var/log/pihole-FTL.log
[ ! -f /var/log/pihole.log ] && install -m 644 -o pihole -g pihole /dev/null /var/log/pihole.log
[ ! -f /etc/pihole/dhcp.leases ] && install -m 644 -o pihole -g pihole /dev/null /etc/pihole/dhcp.leases
# Ensure that permissions are set so that pihole-FTL can edit all necessary files # Ensure that permissions are set so that pihole-FTL can edit all necessary files
chown pihole:pihole /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole-FTL.log /var/log/pihole.log /etc/pihole/dhcp.leases /run/pihole /etc/pihole chown pihole:pihole /run/pihole /etc/pihole /var/log/pihole.log /var/log/pihole.log /etc/pihole/dhcp.leases
chmod 0644 /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole-FTL.log /var/log/pihole.log /etc/pihole/dhcp.leases
# Ensure that permissions are set so that pihole-FTL can edit the files. We ignore errors as the file may not (yet) exist # Ensure that permissions are set so that pihole-FTL can edit the files. We ignore errors as the file may not (yet) exist
chmod -f 0644 /etc/pihole/macvendor.db chmod -f 0644 /etc/pihole/macvendor.db /etc/pihole/dhcp.leases /var/log/pihole-FTL.log /var/log/pihole.log
# Chown database files to the user FTL runs as. We ignore errors as the files may not (yet) exist # Chown database files to the user FTL runs as. We ignore errors as the files may not (yet) exist
chown -f pihole:pihole /etc/pihole/pihole-FTL.db /etc/pihole/gravity.db /etc/pihole/macvendor.db chown -f pihole:pihole /etc/pihole/pihole-FTL.db /etc/pihole/gravity.db /etc/pihole/macvendor.db
# Chown database file permissions so that the pihole group (web interface) can edit the file. We ignore errors as the files may not (yet) exist # Chown database file permissions so that the pihole group (web interface) can edit the file. We ignore errors as the files may not (yet) exist


@@ -85,8 +85,8 @@ $HTTP["url"] =~ "^/admin/\.(.*)" {
url.access-deny = ("") url.access-deny = ("")
} }
# allow teleporter iframe on settings page # allow teleporter and API qr code iframe on settings page
$HTTP["url"] =~ "/teleporter\.php$" { $HTTP["url"] =~ "/(teleporter|api_token)\.php$" {
$HTTP["referer"] =~ "/admin/settings\.php" { $HTTP["referer"] =~ "/admin/settings\.php" {
setenv.add-response-header = ( "X-Frame-Options" => "SAMEORIGIN" ) setenv.add-response-header = ( "X-Frame-Options" => "SAMEORIGIN" )
} }


@@ -93,8 +93,8 @@ $HTTP["url"] =~ "^/admin/\.(.*)" {
url.access-deny = ("") url.access-deny = ("")
} }
# allow teleporter iframe on settings page # allow teleporter and API qr code iframe on settings page
$HTTP["url"] =~ "/teleporter\.php$" { $HTTP["url"] =~ "/(teleporter|api_token)\.php$" {
$HTTP["referer"] =~ "/admin/settings\.php" { $HTTP["referer"] =~ "/admin/settings\.php" {
setenv.add-response-header = ( "X-Frame-Options" => "SAMEORIGIN" ) setenv.add-response-header = ( "X-Frame-Options" => "SAMEORIGIN" )
} }


@@ -2,7 +2,7 @@
# shellcheck disable=SC1090 # shellcheck disable=SC1090
# Pi-hole: A black hole for Internet advertisements # Pi-hole: A black hole for Internet advertisements
# (c) 2017-2018 Pi-hole, LLC (https://pi-hole.net) # (c) 2017-2021 Pi-hole, LLC (https://pi-hole.net)
# Network-wide ad blocking via your own hardware. # Network-wide ad blocking via your own hardware.
# #
# Installs and Updates Pi-hole # Installs and Updates Pi-hole
@@ -76,7 +76,7 @@ PI_HOLE_CONFIG_DIR="/etc/pihole"
PI_HOLE_BIN_DIR="/usr/local/bin" PI_HOLE_BIN_DIR="/usr/local/bin"
PI_HOLE_BLOCKPAGE_DIR="${webroot}/pihole" PI_HOLE_BLOCKPAGE_DIR="${webroot}/pihole"
if [ -z "$useUpdateVars" ]; then if [ -z "$useUpdateVars" ]; then
useUpdateVars=false useUpdateVars=false
fi fi
adlistFile="/etc/pihole/adlists.list" adlistFile="/etc/pihole/adlists.list"
@@ -90,7 +90,7 @@ PRIVACY_LEVEL=0
CACHE_SIZE=10000 CACHE_SIZE=10000
if [ -z "${USER}" ]; then if [ -z "${USER}" ]; then
USER="$(id -un)" USER="$(id -un)"
fi fi
# whiptail dialog dimensions: 20 rows and 70 chars width assures to fit on small screens and is known to hold all content. # whiptail dialog dimensions: 20 rows and 70 chars width assures to fit on small screens and is known to hold all content.
@@ -133,7 +133,7 @@ fi
# A simple function that just echoes out our logo in ASCII format # A simple function that just echoes out our logo in ASCII format
# This lets users know that it is a Pi-hole, LLC product # This lets users know that it is a Pi-hole, LLC product
show_ascii_berry() { show_ascii_berry() {
echo -e " echo -e "
${COL_LIGHT_GREEN}.;;,. ${COL_LIGHT_GREEN}.;;,.
.ccccc:,. .ccccc:,.
:cccclll:. ..,, :cccclll:. ..,,
@@ -172,7 +172,7 @@ os_check() {
local remote_os_domain valid_os valid_version valid_response detected_os detected_version display_warning cmdResult digReturnCode response local remote_os_domain valid_os valid_version valid_response detected_os detected_version display_warning cmdResult digReturnCode response
remote_os_domain=${OS_CHECK_DOMAIN_NAME:-"versions.pi-hole.net"} remote_os_domain=${OS_CHECK_DOMAIN_NAME:-"versions.pi-hole.net"}
detected_os=$(grep "\bID\b" /etc/os-release | cut -d '=' -f2 | tr -d '"') detected_os=$(grep '^ID=' /etc/os-release | cut -d '=' -f2 | tr -d '"')
detected_version=$(grep VERSION_ID /etc/os-release | cut -d '=' -f2 | tr -d '"') detected_version=$(grep VERSION_ID /etc/os-release | cut -d '=' -f2 | tr -d '"')
cmdResult="$(dig +short -t txt "${remote_os_domain}" @ns1.pi-hole.net 2>&1; echo $?)" cmdResult="$(dig +short -t txt "${remote_os_domain}" @ns1.pi-hole.net 2>&1; echo $?)"
@@ -261,174 +261,174 @@ os_check() {
# Compatibility # Compatibility
package_manager_detect() { package_manager_detect() {
# First check to see if apt-get is installed. # First check to see if apt-get is installed.
if is_command apt-get ; then if is_command apt-get ; then
# Set some global variables here # Set some global variables here
# We don't set them earlier since the installed package manager might be rpm, so these values would be different # We don't set them earlier since the installed package manager might be rpm, so these values would be different
PKG_MANAGER="apt-get" PKG_MANAGER="apt-get"
# A variable to store the command used to update the package cache # A variable to store the command used to update the package cache
UPDATE_PKG_CACHE="${PKG_MANAGER} update" UPDATE_PKG_CACHE="${PKG_MANAGER} update"
# The command we will use to actually install packages # The command we will use to actually install packages
PKG_INSTALL=("${PKG_MANAGER}" -qq --no-install-recommends install) PKG_INSTALL=("${PKG_MANAGER}" -qq --no-install-recommends install)
# grep -c will return 1 if there are no matches. This is an acceptable condition, so we OR TRUE to prevent set -e exiting the script. # grep -c will return 1 if there are no matches. This is an acceptable condition, so we OR TRUE to prevent set -e exiting the script.
PKG_COUNT="${PKG_MANAGER} -s -o Debug::NoLocking=true upgrade | grep -c ^Inst || true" PKG_COUNT="${PKG_MANAGER} -s -o Debug::NoLocking=true upgrade | grep -c ^Inst || true"
# Update package cache # Update package cache
update_package_cache || exit 1 update_package_cache || exit 1
# Check for and determine version number (major and minor) of current php install # Check for and determine version number (major and minor) of current php install
local phpVer="php" local phpVer="php"
if is_command php ; then if is_command php ; then
printf " %b Existing PHP installation detected : PHP version %s\\n" "${INFO}" "$(php <<< "<?php echo PHP_VERSION ?>")" printf " %b Existing PHP installation detected : PHP version %s\\n" "${INFO}" "$(php <<< "<?php echo PHP_VERSION ?>")"
printf -v phpInsMajor "%d" "$(php <<< "<?php echo PHP_MAJOR_VERSION ?>")" printf -v phpInsMajor "%d" "$(php <<< "<?php echo PHP_MAJOR_VERSION ?>")"
printf -v phpInsMinor "%d" "$(php <<< "<?php echo PHP_MINOR_VERSION ?>")" printf -v phpInsMinor "%d" "$(php <<< "<?php echo PHP_MINOR_VERSION ?>")"
phpVer="php$phpInsMajor.$phpInsMinor" phpVer="php$phpInsMajor.$phpInsMinor"
fi fi
# Packages required to perform the os_check (stored as an array) # Packages required to perform the os_check (stored as an array)
OS_CHECK_DEPS=(grep dnsutils) OS_CHECK_DEPS=(grep dnsutils)
# Packages required to run this install script (stored as an array) # Packages required to run this install script (stored as an array)
INSTALLER_DEPS=(git iproute2 whiptail ca-certificates) INSTALLER_DEPS=(git iproute2 whiptail ca-certificates)
# Packages required to run Pi-hole (stored as an array) # Packages required to run Pi-hole (stored as an array)
PIHOLE_DEPS=(cron curl iputils-ping lsof psmisc sudo unzip idn2 sqlite3 libcap2-bin dns-root-data libcap2) PIHOLE_DEPS=(cron curl iputils-ping psmisc sudo unzip idn2 libcap2-bin dns-root-data libcap2 netcat-openbsd)
# Packages required for the Web admin interface (stored as an array) # Packages required for the Web admin interface (stored as an array)
# It's useful to separate this from Pi-hole, since the two repos are also setup separately # It's useful to separate this from Pi-hole, since the two repos are also setup separately
PIHOLE_WEB_DEPS=(lighttpd "${phpVer}-common" "${phpVer}-cgi" "${phpVer}-sqlite3" "${phpVer}-xml" "${phpVer}-intl") PIHOLE_WEB_DEPS=(lighttpd "${phpVer}-common" "${phpVer}-cgi" "${phpVer}-sqlite3" "${phpVer}-xml" "${phpVer}-intl")
# Prior to PHP8.0, JSON functionality is provided as dedicated module, required by Pi-hole AdminLTE: https://www.php.net/manual/json.installation.php # Prior to PHP8.0, JSON functionality is provided as dedicated module, required by Pi-hole AdminLTE: https://www.php.net/manual/json.installation.php
if [[ -z "${phpInsMajor}" || "${phpInsMajor}" -lt 8 ]]; then if [[ -z "${phpInsMajor}" || "${phpInsMajor}" -lt 8 ]]; then
PIHOLE_WEB_DEPS+=("${phpVer}-json") PIHOLE_WEB_DEPS+=("${phpVer}-json")
fi fi
# The Web server user, # The Web server user,
LIGHTTPD_USER="www-data" LIGHTTPD_USER="www-data"
# group, # group,
LIGHTTPD_GROUP="www-data" LIGHTTPD_GROUP="www-data"
# and config file # and config file
LIGHTTPD_CFG="lighttpd.conf.debian" LIGHTTPD_CFG="lighttpd.conf.debian"
# This function waits for dpkg to unlock, which signals that the previous apt-get command has finished. # This function waits for dpkg to unlock, which signals that the previous apt-get command has finished.
test_dpkg_lock() { test_dpkg_lock() {
i=0 i=0
# fuser is a program to show which processes use the named files, sockets, or filesystems # fuser is a program to show which processes use the named files, sockets, or filesystems
# So while the lock is held, # So while the lock is held,
while fuser /var/lib/dpkg/lock >/dev/null 2>&1 while fuser /var/lib/dpkg/lock >/dev/null 2>&1
do do
# we wait half a second, # we wait half a second,
sleep 0.5 sleep 0.5
# increase the iterator, # increase the iterator,
((i=i+1)) ((i=i+1))
done done
# and then report success once dpkg is unlocked. # and then report success once dpkg is unlocked.
return 0 return 0
} }
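A bounded variant of the same idea is sketched below for comparison; it is hypothetical and not what the installer ships, which deliberately waits for as long as the lock is held:

    wait_for_dpkg_lock() {
        local tries=0
        # Same fuser probe as above, but give up after roughly 60 seconds instead of looping forever.
        while fuser /var/lib/dpkg/lock >/dev/null 2>&1; do
            sleep 0.5
            tries=$((tries + 1))
            if [ "${tries}" -ge 120 ]; then
                return 1
            fi
        done
        return 0
    }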
# If apt-get is not found, check for rpm.
elif is_command rpm ; then
    # Then check if dnf or yum is the package manager
    if is_command dnf ; then
        PKG_MANAGER="dnf"
    else
        PKG_MANAGER="yum"
    fi
    # These variable names match the ones for apt-get. See above for an explanation of what they are for.
    PKG_INSTALL=("${PKG_MANAGER}" install -y)
    PKG_COUNT="${PKG_MANAGER} check-update | egrep '(.i686|.x86|.noarch|.arm|.src)' | wc -l"
    OS_CHECK_DEPS=(grep bind-utils)
    INSTALLER_DEPS=(git iproute newt procps-ng which chkconfig ca-certificates)
    PIHOLE_DEPS=(cronie curl findutils sudo unzip libidn2 psmisc sqlite libcap lsof) PIHOLE_DEPS=(cronie curl findutils sudo unzip libidn2 psmisc libcap nmap-ncat)
    PIHOLE_WEB_DEPS=(lighttpd lighttpd-fastcgi php-common php-cli php-pdo php-xml php-json php-intl)
    LIGHTTPD_USER="lighttpd"
    LIGHTTPD_GROUP="lighttpd"
    LIGHTTPD_CFG="lighttpd.conf.fedora"
# If neither apt-get or yum/dnf package managers were found
else
    # we cannot install required packages
    printf " %b No supported package manager found\\n" "${CROSS}"
    # so exit the installer
    exit
fi
}
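Throughout this section the script calls an is_command helper that is defined earlier in basic-install.sh and is therefore not visible in this hunk; its assumed shape is a thin wrapper around the command builtin:

    # Assumed helper (defined outside this excerpt): succeeds when the named program is available.
    is_command() {
        local check_command="$1"
        command -v "${check_command}" > /dev/null 2>&1
    }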
select_rpm_php(){ select_rpm_php(){
# If the host OS is Fedora, # If the host OS is Fedora,
if grep -qiE 'fedora|fedberry' /etc/redhat-release; then if grep -qiE 'fedora|fedberry' /etc/redhat-release; then
# all required packages should be available by default with the latest fedora release # all required packages should be available by default with the latest fedora release
: # continue : # continue
# or if host OS is CentOS, # or if host OS is CentOS,
elif grep -qiE 'centos|scientific' /etc/redhat-release; then elif grep -qiE 'centos|scientific' /etc/redhat-release; then
# Pi-Hole currently supports CentOS 7+ with PHP7+ # Pi-Hole currently supports CentOS 7+ with PHP7+
SUPPORTED_CENTOS_VERSION=7 SUPPORTED_CENTOS_VERSION=7
SUPPORTED_CENTOS_PHP_VERSION=7 SUPPORTED_CENTOS_PHP_VERSION=7
# Check current CentOS major release version # Check current CentOS major release version
CURRENT_CENTOS_VERSION=$(grep -oP '(?<= )[0-9]+(?=\.?)' /etc/redhat-release) CURRENT_CENTOS_VERSION=$(grep -oP '(?<= )[0-9]+(?=\.?)' /etc/redhat-release)
# Check if CentOS version is supported # Check if CentOS version is supported
if [[ $CURRENT_CENTOS_VERSION -lt $SUPPORTED_CENTOS_VERSION ]]; then if [[ $CURRENT_CENTOS_VERSION -lt $SUPPORTED_CENTOS_VERSION ]]; then
printf " %b CentOS %s is not supported.\\n" "${CROSS}" "${CURRENT_CENTOS_VERSION}" printf " %b CentOS %s is not supported.\\n" "${CROSS}" "${CURRENT_CENTOS_VERSION}"
printf " Please update to CentOS release %s or later.\\n" "${SUPPORTED_CENTOS_VERSION}" printf " Please update to CentOS release %s or later.\\n" "${SUPPORTED_CENTOS_VERSION}"
# exit the installer # exit the installer
exit exit
fi fi
# php-json is not required on CentOS 7 as it is already compiled into php
# verify via `php -m | grep json`
if [[ $CURRENT_CENTOS_VERSION -eq 7 ]]; then if [[ $CURRENT_CENTOS_VERSION -eq 7 ]]; then
# create a temporary array as arrays are not designed for use as mutable data structures # create a temporary array as arrays are not designed for use as mutable data structures
CENTOS7_PIHOLE_WEB_DEPS=() CENTOS7_PIHOLE_WEB_DEPS=()
for i in "${!PIHOLE_WEB_DEPS[@]}"; do for i in "${!PIHOLE_WEB_DEPS[@]}"; do
if [[ ${PIHOLE_WEB_DEPS[i]} != "php-json" ]]; then if [[ ${PIHOLE_WEB_DEPS[i]} != "php-json" ]]; then
CENTOS7_PIHOLE_WEB_DEPS+=( "${PIHOLE_WEB_DEPS[i]}" ) CENTOS7_PIHOLE_WEB_DEPS+=( "${PIHOLE_WEB_DEPS[i]}" )
fi fi
done done
# re-assign the clean dependency array back to PIHOLE_WEB_DEPS # re-assign the clean dependency array back to PIHOLE_WEB_DEPS
PIHOLE_WEB_DEPS=("${CENTOS7_PIHOLE_WEB_DEPS[@]}") PIHOLE_WEB_DEPS=("${CENTOS7_PIHOLE_WEB_DEPS[@]}")
unset CENTOS7_PIHOLE_WEB_DEPS unset CENTOS7_PIHOLE_WEB_DEPS
fi fi
# CentOS requires the EPEL repository to gain access to Fedora packages # CentOS requires the EPEL repository to gain access to Fedora packages
EPEL_PKG="epel-release" EPEL_PKG="epel-release"
rpm -q ${EPEL_PKG} &> /dev/null || rc=$? rpm -q ${EPEL_PKG} &> /dev/null || rc=$?
if [[ $rc -ne 0 ]]; then if [[ $rc -ne 0 ]]; then
printf " %b Enabling EPEL package repository (https://fedoraproject.org/wiki/EPEL)\\n" "${INFO}" printf " %b Enabling EPEL package repository (https://fedoraproject.org/wiki/EPEL)\\n" "${INFO}"
"${PKG_INSTALL[@]}" ${EPEL_PKG} &> /dev/null "${PKG_INSTALL[@]}" ${EPEL_PKG} &> /dev/null
printf " %b Installed %s\\n" "${TICK}" "${EPEL_PKG}" printf " %b Installed %s\\n" "${TICK}" "${EPEL_PKG}"
fi fi
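On a bare CentOS host, the EPEL step above amounts to something like the following manual command (illustrative only; the script uses its own PKG_INSTALL array and suppresses output):

    rpm -q epel-release > /dev/null 2>&1 || sudo yum install -y epel-release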
# The default php on CentOS 7.x is 5.4 which is EOL # The default php on CentOS 7.x is 5.4 which is EOL
# Check if the version of PHP available via installed repositories is >= to PHP 7 # Check if the version of PHP available via installed repositories is >= to PHP 7
AVAILABLE_PHP_VERSION=$("${PKG_MANAGER}" info php | grep -i version | grep -o '[0-9]\+' | head -1) AVAILABLE_PHP_VERSION=$("${PKG_MANAGER}" info php | grep -i version | grep -o '[0-9]\+' | head -1)
if [[ $AVAILABLE_PHP_VERSION -ge $SUPPORTED_CENTOS_PHP_VERSION ]]; then if [[ $AVAILABLE_PHP_VERSION -ge $SUPPORTED_CENTOS_PHP_VERSION ]]; then
# Since PHP 7 is available by default, install via default PHP package names # Since PHP 7 is available by default, install via default PHP package names
: # do nothing as PHP is current : # do nothing as PHP is current
else
    REMI_PKG="remi-release"
    REMI_REPO="remi-php72"
    rpm -q ${REMI_PKG} &> /dev/null || rc=$?
    if [[ $rc -ne 0 ]]; then
        # The PHP version available via default repositories is older than version 7
        if ! whiptail --defaultno --title "PHP 7 Update (recommended)" --yesno "PHP 7.x is recommended for both security and language features.\\nWould you like to install PHP7 via Remi's RPM repository?\\n\\nSee: https://rpms.remirepo.net for more information" "${r}" "${c}"; then
            # User decided to NOT update PHP from REMI, attempt to install the default available PHP version
            printf " %b User opt-out of PHP 7 upgrade on CentOS. Deprecated PHP may be in use.\\n" "${INFO}"
            : # continue with unsupported php version
        else
            printf " %b Enabling Remi's RPM repository (https://rpms.remirepo.net)\\n" "${INFO}"
            "${PKG_INSTALL[@]}" "https://rpms.remirepo.net/enterprise/${REMI_PKG}-$(rpm -E '%{rhel}').rpm" &> /dev/null
            # enable the PHP 7 repository via yum-config-manager (provided by yum-utils)
            "${PKG_INSTALL[@]}" "yum-utils" &> /dev/null
            yum-config-manager --enable ${REMI_REPO} &> /dev/null
            printf " %b Remi's RPM repository has been enabled for PHP7\\n" "${TICK}"
            # trigger an install/update of PHP to ensure previous version of PHP is updated from REMI
            if "${PKG_INSTALL[@]}" "php-cli" &> /dev/null; then
                printf " %b PHP7 installed/updated via Remi's RPM repository\\n" "${TICK}"
            else
                printf " %b There was a problem updating to PHP7 via Remi's RPM repository\\n" "${CROSS}"
                exit 1
            fi
        fi
    fi
fi
else
    # Warn user of unsupported version of Fedora or CentOS
    if ! whiptail --defaultno --title "Unsupported RPM based distribution" --yesno "Would you like to continue installation on an unsupported RPM based distribution?\\n\\nPlease ensure the following packages have been installed manually:\\n\\n- lighttpd\\n- lighttpd-fastcgi\\n- PHP version 7+" "${r}" "${c}"; then
        printf " %b Aborting installation due to unsupported RPM based distribution\\n" "${CROSS}"
        exit
    else
        printf " %b Continuing installation with unsupported RPM based distribution\\n" "${INFO}"
    fi
fi
}
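Once this function has run, the outcome can be spot-checked by hand; these illustrative commands are not part of the script:

    php -v | head -n 1     # shows whether the Remi PHP 7.x packages took effect
    php -m | grep -c json  # 1 when JSON support is present, matching the CentOS 7 note above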
# A function for checking if a directory is a git repository # A function for checking if a directory is a git repository
@@ -519,7 +519,7 @@ update_repo() {
# In case extra commits have been added after tagging/release (i.e in case of metadata updates/README.MD tweaks) # In case extra commits have been added after tagging/release (i.e in case of metadata updates/README.MD tweaks)
curBranch=$(git rev-parse --abbrev-ref HEAD) curBranch=$(git rev-parse --abbrev-ref HEAD)
if [[ "${curBranch}" == "master" ]]; then if [[ "${curBranch}" == "master" ]]; then
git reset --hard "$(git describe --abbrev=0 --tags)" || return $? git reset --hard "$(git describe --abbrev=0 --tags)" || return $?
fi fi
# Show a completion message # Show a completion message
printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}" printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
@@ -629,12 +629,12 @@ welcomeDialogs() {
IMPORTANT: If you have not already done so, you must ensure that this device has a static IP. Either through DHCP reservation, or by manually assigning one. Depending on your operating system, there are many ways to achieve this. IMPORTANT: If you have not already done so, you must ensure that this device has a static IP. Either through DHCP reservation, or by manually assigning one. Depending on your operating system, there are many ways to achieve this.
Choose yes to indicate that you have understood this message, and wish to continue" "${r}" "${c}"; then Choose yes to indicate that you have understood this message, and wish to continue" "${r}" "${c}"; then
#Nothing to do, continue #Nothing to do, continue
echo echo
else else
printf " %b Installer exited at static IP message.\\n" "${INFO}" printf " %b Installer exited at static IP message.\\n" "${INFO}"
exit 1 exit 1
fi fi
} }
# A function that lets the user pick an interface to use with Pi-hole # A function that lets the user pick an interface to use with Pi-hole
@@ -674,7 +674,7 @@ chooseInterface() {
# Feed the available interfaces into this while loop # Feed the available interfaces into this while loop
done <<< "${availableInterfaces}" done <<< "${availableInterfaces}"
# The whiptail command that will be run, stored in a variable # The whiptail command that will be run, stored in a variable
chooseInterfaceCmd=(whiptail --separate-output --radiolist "Choose An Interface (press space to toggle selection)" "${r}" "${c}" "${interfaceCount}") chooseInterfaceCmd=(whiptail --separate-output --radiolist "Choose An Interface (press space to toggle selection)" "${r}" "${c}" 6)
# Now run the command using the interfaces saved into the array # Now run the command using the interfaces saved into the array
chooseInterfaceOptions=$("${chooseInterfaceCmd[@]}" "${interfacesArray[@]}" 2>&1 >/dev/tty) || \ chooseInterfaceOptions=$("${chooseInterfaceCmd[@]}" "${interfacesArray[@]}" 2>&1 >/dev/tty) || \
# If the user chooses Cancel, exit # If the user chooses Cancel, exit
@@ -759,9 +759,8 @@ collect_v4andv6_information() {
printf " %b IPv4 address: %s\\n" "${INFO}" "${IPV4_ADDRESS}" printf " %b IPv4 address: %s\\n" "${INFO}" "${IPV4_ADDRESS}"
# if `dhcpcd` is used offer to set this as static IP for the device # if `dhcpcd` is used offer to set this as static IP for the device
if [[ -f "/etc/dhcpcd.conf" ]]; then if [[ -f "/etc/dhcpcd.conf" ]]; then
# configure networking via dhcpcd # configure networking via dhcpcd
getStaticIPv4Settings getStaticIPv4Settings
setDHCPCD
fi fi
find_IPv6_information find_IPv6_information
printf " %b IPv6 address: %s\\n" "${INFO}" "${IPV6_ADDRESS}" printf " %b IPv6 address: %s\\n" "${INFO}" "${IPV6_ADDRESS}"
@@ -770,47 +769,59 @@ collect_v4andv6_information() {
getStaticIPv4Settings() { getStaticIPv4Settings() {
# Local, named variables # Local, named variables
local ipSettingsCorrect local ipSettingsCorrect
local DHCPChoice
# Ask if the user wants to use DHCP settings as their static IP # Ask if the user wants to use DHCP settings as their static IP
# This is useful for users that are using DHCP reservations; then we can just use the information gathered via our functions # This is useful for users that are using DHCP reservations; then we can just use the information gathered via our functions
if whiptail --backtitle "Calibrating network interface" --title "Static IP Address" --yesno "Do you want to use your current network settings as a static address? DHCPChoice=$(whiptail --backtitle "Calibrating network interface" --title "Static IP Address" --menu --separate-output "Do you want to use your current network settings as a static address? \\n
IP address: ${IPV4_ADDRESS} IP address: ${IPV4_ADDRESS} \\n
Gateway: ${IPv4gw}" "${r}" "${c}"; then Gateway: ${IPv4gw} \\n" "${r}" "${c}" 3\
"Yes" "Set static IP using current values" \
"No" "Set static IP using custom values" \
"Skip" "I will set a static IP later, or have already done so" 3>&2 2>&1 1>&3) || \
{ printf " %bCancel was selected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; exit 1; }
case ${DHCPChoice} in
"Yes")
# If they choose yes, let the user know that the IP address will not be available via DHCP and may cause a conflict. # If they choose yes, let the user know that the IP address will not be available via DHCP and may cause a conflict.
whiptail --msgbox --backtitle "IP information" --title "FYI: IP Conflict" "It is possible your router could still try to assign this IP to a device, which would cause a conflict. But in most cases the router is smart enough to not do that. whiptail --msgbox --backtitle "IP information" --title "FYI: IP Conflict" "It is possible your router could still try to assign this IP to a device, which would cause a conflict. But in most cases the router is smart enough to not do that.
If you are worried, either manually set the address, or modify the DHCP reservation pool so it does not include the IP you want. If you are worried, either manually set the address, or modify the DHCP reservation pool so it does not include the IP you want.
It is also possible to use a DHCP reservation, but if you are going to do that, you might as well set a static address." "${r}" "${c}" It is also possible to use a DHCP reservation, but if you are going to do that, you might as well set a static address." "${r}" "${c}"
# Nothing else to do since the variables are already set above # Nothing else to do since the variables are already set above
else setDHCPCD
# Otherwise, we need to ask the user to input their desired settings. ;;
# Start by getting the IPv4 address (pre-filling it with info gathered from DHCP)
# Start a loop to let the user enter their information with the chance to go back and edit it if necessary
until [[ "${ipSettingsCorrect}" = True ]]; do
# Ask for the IPv4 address "No")
IPV4_ADDRESS=$(whiptail --backtitle "Calibrating network interface" --title "IPv4 address" --inputbox "Enter your desired IPv4 address" "${r}" "${c}" "${IPV4_ADDRESS}" 3>&1 1>&2 2>&3) || \ # Otherwise, we need to ask the user to input their desired settings.
# Canceling IPv4 settings window # Start by getting the IPv4 address (pre-filling it with info gathered from DHCP)
{ ipSettingsCorrect=False; echo -e " ${COL_LIGHT_RED}Cancel was selected, exiting installer${COL_NC}"; exit 1; } # Start a loop to let the user enter their information with the chance to go back and edit it if necessary
printf " %b Your static IPv4 address: %s\\n" "${INFO}" "${IPV4_ADDRESS}" until [[ "${ipSettingsCorrect}" = True ]]; do
# Ask for the gateway # Ask for the IPv4 address
IPv4gw=$(whiptail --backtitle "Calibrating network interface" --title "IPv4 gateway (router)" --inputbox "Enter your desired IPv4 default gateway" "${r}" "${c}" "${IPv4gw}" 3>&1 1>&2 2>&3) || \ IPV4_ADDRESS=$(whiptail --backtitle "Calibrating network interface" --title "IPv4 address" --inputbox "Enter your desired IPv4 address" "${r}" "${c}" "${IPV4_ADDRESS}" 3>&1 1>&2 2>&3) || \
# Canceling gateway settings window # Canceling IPv4 settings window
{ ipSettingsCorrect=False; echo -e " ${COL_LIGHT_RED}Cancel was selected, exiting installer${COL_NC}"; exit 1; } { ipSettingsCorrect=False; echo -e " ${COL_LIGHT_RED}Cancel was selected, exiting installer${COL_NC}"; exit 1; }
printf " %b Your static IPv4 gateway: %s\\n" "${INFO}" "${IPv4gw}" printf " %b Your static IPv4 address: %s\\n" "${INFO}" "${IPV4_ADDRESS}"
# Give the user a chance to review their settings before moving on # Ask for the gateway
if whiptail --backtitle "Calibrating network interface" --title "Static IP Address" --yesno "Are these settings correct? IPv4gw=$(whiptail --backtitle "Calibrating network interface" --title "IPv4 gateway (router)" --inputbox "Enter your desired IPv4 default gateway" "${r}" "${c}" "${IPv4gw}" 3>&1 1>&2 2>&3) || \
IP address: ${IPV4_ADDRESS} # Canceling gateway settings window
Gateway: ${IPv4gw}" "${r}" "${c}"; then { ipSettingsCorrect=False; echo -e " ${COL_LIGHT_RED}Cancel was selected, exiting installer${COL_NC}"; exit 1; }
# After that's done, the loop ends and we move on printf " %b Your static IPv4 gateway: %s\\n" "${INFO}" "${IPv4gw}"
ipSettingsCorrect=True
else # Give the user a chance to review their settings before moving on
# If the settings are wrong, the loop continues if whiptail --backtitle "Calibrating network interface" --title "Static IP Address" --yesno "Are these settings correct?
ipSettingsCorrect=False IP address: ${IPV4_ADDRESS}
fi Gateway: ${IPv4gw}" "${r}" "${c}"; then
done # After that's done, the loop ends and we move on
# End the if statement for DHCP vs. static ipSettingsCorrect=True
fi else
# If the settings are wrong, the loop continues
ipSettingsCorrect=False
fi
done
setDHCPCD
;;
esac
} }
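The menu-plus-cancel idiom used above can be exercised in isolation; the labels and sizes below are made up for the sketch:

    r=20; c=70
    choice=$(whiptail --title "Demo" --menu "Pick one" "${r}" "${c}" 3 \
        "Yes"  "Keep the current values" \
        "No"   "Enter custom values" \
        "Skip" "Decide later" 3>&2 2>&1 1>&3) || { echo "Cancel was selected"; exit 1; }
    echo "Selected: ${choice}"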
# Configure networking via dhcpcd # Configure networking via dhcpcd
@@ -845,7 +856,7 @@ valid_ip() {
# Regex matching an optional port (starting with '#') range of 1-65536 # Regex matching an optional port (starting with '#') range of 1-65536
local portelem="(#(6553[0-5]|655[0-2][0-9]|65[0-4][0-9]{2}|6[0-4][0-9]{3}|[1-5][0-9]{4}|[1-9][0-9]{0,3}|0))?"; local portelem="(#(6553[0-5]|655[0-2][0-9]|65[0-4][0-9]{2}|6[0-4][0-9]{3}|[1-5][0-9]{4}|[1-9][0-9]{0,3}|0))?";
# Build a full IPv4 regex from the above subexpressions # Build a full IPv4 regex from the above subexpressions
local regex="^${ipv4elem}\.${ipv4elem}\.${ipv4elem}\.${ipv4elem}${portelem}$" local regex="^${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}${portelem}$"
# Evaluate the regex, and return the result # Evaluate the regex, and return the result
[[ $ip =~ ${regex} ]] [[ $ip =~ ${regex} ]]
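A quick way to convince yourself that the doubled backslashes above still produce a literally escaped dot in the assembled pattern (the octet expression here is simplified and stands in for the script's ipv4elem):

    ipv4elem="(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])"
    regex="^${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}$"
    [[ "192.168.2.1" =~ ${regex} ]] && echo "valid IPv4"   # inside double quotes, \\. becomes \.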
@@ -902,8 +913,8 @@ setDNS() {
IFS=${OIFS} IFS=${OIFS}
# In a whiptail dialog, show the options # In a whiptail dialog, show the options
DNSchoices=$(whiptail --separate-output --menu "Select Upstream DNS Provider. To use your own, select Custom." "${r}" "${c}" 7 \ DNSchoices=$(whiptail --separate-output --menu "Select Upstream DNS Provider. To use your own, select Custom." "${r}" "${c}" 7 \
"${DNSChooseOptions[@]}" 2>&1 >/dev/tty) || \ "${DNSChooseOptions[@]}" 2>&1 >/dev/tty) || \
# Exit if the user selects "Cancel" # Exit if the user selects "Cancel"
{ printf " %bCancel was selected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; exit 1; } { printf " %bCancel was selected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; exit 1; }
# Depending on the user's choice, set the GLOBAL variables to the IP of the respective provider # Depending on the user's choice, set the GLOBAL variables to the IP of the respective provider
@@ -1117,8 +1128,11 @@ chooseBlocklists() {
appendToListsFile "${choice}" appendToListsFile "${choice}"
done done
# Create an empty adList file with appropriate permissions.
if [ ! -f "${adlistFile}" ]; then
    install -m 644 /dev/null "${adlistFile}"
else
    chmod 644 "${adlistFile}"
fi
} }
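The guard above avoids calling touch unconditionally: a missing file is created with the desired mode in one step via install, and an existing file only has its permissions normalised. The same pattern in isolation, with a hypothetical path:

    f="/tmp/example.list"
    if [ ! -f "${f}" ]; then
        install -m 644 /dev/null "${f}"   # create an empty file with the desired mode in one step
    else
        chmod 644 "${f}"                  # file already exists, only normalise its permissions
    fi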
# Accept a string parameter, it must be one of the default lists # Accept a string parameter, it must be one of the default lists
@@ -1169,8 +1183,8 @@ version_check_dnsmasq() {
install -D -m 644 -T "${dnsmasq_original_config}" "${dnsmasq_conf}" install -D -m 644 -T "${dnsmasq_original_config}" "${dnsmasq_conf}"
printf "%b %b Restoring default dnsmasq.conf...\\n" "${OVER}" "${TICK}" printf "%b %b Restoring default dnsmasq.conf...\\n" "${OVER}" "${TICK}"
else else
# Otherwise, don't do anything
printf " it is not a Pi-hole file, leaving alone!\\n" printf " it is not a Pi-hole file, leaving alone!\\n"
fi fi
else else
# If a file cannot be found, # If a file cannot be found,
@@ -1205,8 +1219,8 @@ version_check_dnsmasq() {
sed -i '/^server=@DNS2@/d' "${dnsmasq_pihole_01_target}" sed -i '/^server=@DNS2@/d' "${dnsmasq_pihole_01_target}"
fi fi
# Set the cache size # Set the cache size
sed -i "s/@CACHE_SIZE@/$CACHE_SIZE/" "${dnsmasq_pihole_01_target}" sed -i "s/@CACHE_SIZE@/$CACHE_SIZE/" "${dnsmasq_pihole_01_target}"
sed -i 's/^#conf-dir=\/etc\/dnsmasq.d$/conf-dir=\/etc\/dnsmasq.d/' "${dnsmasq_conf}" sed -i 's/^#conf-dir=\/etc\/dnsmasq.d$/conf-dir=\/etc\/dnsmasq.d/' "${dnsmasq_conf}"
@@ -1288,10 +1302,10 @@ installConfigs() {
echo "${DNS_SERVERS}" > "${PI_HOLE_CONFIG_DIR}/dns-servers.conf" echo "${DNS_SERVERS}" > "${PI_HOLE_CONFIG_DIR}/dns-servers.conf"
chmod 644 "${PI_HOLE_CONFIG_DIR}/dns-servers.conf" chmod 644 "${PI_HOLE_CONFIG_DIR}/dns-servers.conf"
# Install empty file if it does not exist # Install template file if it does not exist
if [[ ! -r "${PI_HOLE_CONFIG_DIR}/pihole-FTL.conf" ]]; then if [[ ! -r "${PI_HOLE_CONFIG_DIR}/pihole-FTL.conf" ]]; then
install -d -m 0755 ${PI_HOLE_CONFIG_DIR} install -d -m 0755 ${PI_HOLE_CONFIG_DIR}
if ! install -o pihole -m 664 /dev/null "${PI_HOLE_CONFIG_DIR}/pihole-FTL.conf" &>/dev/null; then if ! install -T -o pihole -m 664 "${PI_HOLE_LOCAL_REPO}/advanced/Templates/pihole-FTL.conf" "${PI_HOLE_CONFIG_DIR}/pihole-FTL.conf" &>/dev/null; then
printf " %bError: Unable to initialize configuration file %s/pihole-FTL.conf\\n" "${COL_LIGHT_RED}" "${PI_HOLE_CONFIG_DIR}" printf " %bError: Unable to initialize configuration file %s/pihole-FTL.conf\\n" "${COL_LIGHT_RED}" "${PI_HOLE_CONFIG_DIR}"
return 1 return 1
fi fi
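The install invocation above copies the template in a single step, creating the target with the requested mode; a scratch-file sketch of the same flags (paths here are illustrative):

    install -d -m 0755 /tmp/pihole-demo
    install -T -m 664 /dev/null /tmp/pihole-demo/pihole-FTL.conf   # -T: treat the destination as a file, not a directory
    ls -l /tmp/pihole-demo/pihole-FTL.conf                          # -rw-rw-r--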
@@ -1319,11 +1333,12 @@ installConfigs() {
# and copy in the config file Pi-hole needs # and copy in the config file Pi-hole needs
install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/advanced/${LIGHTTPD_CFG} "${lighttpdConfig}" install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/advanced/${LIGHTTPD_CFG} "${lighttpdConfig}"
# Make sure the external.conf file exists, as lighttpd v1.4.50 crashes without it # Make sure the external.conf file exists, as lighttpd v1.4.50 crashes without it
if [ ! -f /etc/lighttpd/external.conf ]; then
    install -m 644 /dev/null /etc/lighttpd/external.conf
fi
# If there is a custom block page in the html/pihole directory, replace 404 handler in lighttpd config # If there is a custom block page in the html/pihole directory, replace 404 handler in lighttpd config
if [[ -f "${PI_HOLE_BLOCKPAGE_DIR}/custom.php" ]]; then if [[ -f "${PI_HOLE_BLOCKPAGE_DIR}/custom.php" ]]; then
sed -i 's/^\(server\.error-handler-404\s*=\s*\).*$/\1"pihole\/custom\.php"/' "${lighttpdConfig}" sed -i 's/^\(server\.error-handler-404\s*=\s*\).*$/\1"\/pihole\/custom\.php"/' "${lighttpdConfig}"
fi fi
# Make the directories if they do not exist and set the owners # Make the directories if they do not exist and set the owners
mkdir -p /run/lighttpd mkdir -p /run/lighttpd
@@ -1360,7 +1375,12 @@ install_manpage() {
# Testing complete, copy the files & update the man db # Testing complete, copy the files & update the man db
install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole.8 /usr/local/share/man/man8/pihole.8 install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole.8 /usr/local/share/man/man8/pihole.8
install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole-FTL.8 /usr/local/share/man/man8/pihole-FTL.8 install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole-FTL.8 /usr/local/share/man/man8/pihole-FTL.8
install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole-FTL.conf.5 /usr/local/share/man/man5/pihole-FTL.conf.5
# remove previously installed "pihole-FTL.conf.5" man page
if [[ -f "/usr/local/share/man/man5/pihole-FTL.conf.5" ]]; then
rm /usr/local/share/man/man5/pihole-FTL.conf.5
fi
if mandb -q &>/dev/null; then if mandb -q &>/dev/null; then
# Updated successfully # Updated successfully
printf "%b %b man pages installed and database updated\\n" "${OVER}" "${TICK}" printf "%b %b man pages installed and database updated\\n" "${OVER}" "${TICK}"
@@ -1368,7 +1388,7 @@ install_manpage() {
else else
# Something is wrong with the system's man installation, clean up # Something is wrong with the system's man installation, clean up
# our files, (leave everything how we found it). # our files, (leave everything how we found it).
rm /usr/local/share/man/man8/pihole.8 /usr/local/share/man/man8/pihole-FTL.8 /usr/local/share/man/man5/pihole-FTL.conf.5 rm /usr/local/share/man/man8/pihole.8 /usr/local/share/man/man8/pihole-FTL.8
printf "%b %b man page db not updated, man pages not installed\\n" "${OVER}" "${CROSS}" printf "%b %b man page db not updated, man pages not installed\\n" "${OVER}" "${CROSS}"
fi fi
} }
@@ -1481,8 +1501,14 @@ update_package_cache() {
printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}" printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
else else
# Otherwise, show an error and exit # Otherwise, show an error and exit
# In case we used apt-get and apt is also available, we use this as recommendation as we have seen it
# gives more user-friendly (interactive) advice
if [[ ${PKG_MANAGER} == "apt-get" ]] && is_command apt ; then
UPDATE_PKG_CACHE="apt update"
fi
printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${str}" printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${str}"
printf " %bError: Unable to update package cache. Please try \"%s\"%b" "${COL_LIGHT_RED}" "sudo ${UPDATE_PKG_CACHE}" "${COL_NC}" printf " %bError: Unable to update package cache. Please try \"%s\"%b\\n" "${COL_LIGHT_RED}" "sudo ${UPDATE_PKG_CACHE}" "${COL_NC}"
return 1 return 1
fi fi
} }
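The fallback recommendation added above boils down to preferring apt in the hint shown to the user when it exists, because its output is more interactive-friendly; reduced to its essence:

    UPDATE_PKG_CACHE="apt-get update"
    if command -v apt > /dev/null 2>&1; then
        UPDATE_PKG_CACHE="apt update"
    fi
    echo "Please try \"sudo ${UPDATE_PKG_CACHE}\""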
@@ -1548,7 +1574,7 @@ install_dependent_packages() {
# Install Fedora/CentOS packages # Install Fedora/CentOS packages
for i in "$@"; do for i in "$@"; do
# For each package, check if it's already installed (and if so, don't add it to the installArray) # For each package, check if it's already installed (and if so, don't add it to the installArray)
printf " %b Checking for %s..." "${INFO}" "${i}" printf " %b Checking for %s..." "${INFO}" "${i}"
if "${PKG_MANAGER}" -q list installed "${i}" &> /dev/null; then if "${PKG_MANAGER}" -q list installed "${i}" &> /dev/null; then
printf "%b %b Checking for %s\\n" "${OVER}" "${TICK}" "${i}" printf "%b %b Checking for %s\\n" "${OVER}" "${TICK}" "${i}"
@@ -1714,22 +1740,23 @@ finalExports() {
# If the setup variable file exists, # If the setup variable file exists,
if [[ -e "${setupVars}" ]]; then if [[ -e "${setupVars}" ]]; then
# update the variables in the file # update the variables in the file
sed -i.update.bak '/PIHOLE_INTERFACE/d;/IPV4_ADDRESS/d;/IPV6_ADDRESS/d;/PIHOLE_DNS_1\b/d;/PIHOLE_DNS_2\b/d;/QUERY_LOGGING/d;/INSTALL_WEB_SERVER/d;/INSTALL_WEB_INTERFACE/d;/LIGHTTPD_ENABLED/d;/CACHE_SIZE/d;/DNS_FQDN_REQUIRED/d;/DNS_BOGUS_PRIV/d;' "${setupVars}" sed -i.update.bak '/PIHOLE_INTERFACE/d;/IPV4_ADDRESS/d;/IPV6_ADDRESS/d;/PIHOLE_DNS_1\b/d;/PIHOLE_DNS_2\b/d;/QUERY_LOGGING/d;/INSTALL_WEB_SERVER/d;/INSTALL_WEB_INTERFACE/d;/LIGHTTPD_ENABLED/d;/CACHE_SIZE/d;/DNS_FQDN_REQUIRED/d;/DNS_BOGUS_PRIV/d;/DNSMASQ_LISTENING/d;' "${setupVars}"
fi fi
# echo the information to the user # echo the information to the user
{ {
echo "PIHOLE_INTERFACE=${PIHOLE_INTERFACE}" echo "PIHOLE_INTERFACE=${PIHOLE_INTERFACE}"
echo "IPV4_ADDRESS=${IPV4_ADDRESS}" echo "IPV4_ADDRESS=${IPV4_ADDRESS}"
echo "IPV6_ADDRESS=${IPV6_ADDRESS}" echo "IPV6_ADDRESS=${IPV6_ADDRESS}"
echo "PIHOLE_DNS_1=${PIHOLE_DNS_1}" echo "PIHOLE_DNS_1=${PIHOLE_DNS_1}"
echo "PIHOLE_DNS_2=${PIHOLE_DNS_2}" echo "PIHOLE_DNS_2=${PIHOLE_DNS_2}"
echo "QUERY_LOGGING=${QUERY_LOGGING}" echo "QUERY_LOGGING=${QUERY_LOGGING}"
echo "INSTALL_WEB_SERVER=${INSTALL_WEB_SERVER}" echo "INSTALL_WEB_SERVER=${INSTALL_WEB_SERVER}"
echo "INSTALL_WEB_INTERFACE=${INSTALL_WEB_INTERFACE}" echo "INSTALL_WEB_INTERFACE=${INSTALL_WEB_INTERFACE}"
echo "LIGHTTPD_ENABLED=${LIGHTTPD_ENABLED}" echo "LIGHTTPD_ENABLED=${LIGHTTPD_ENABLED}"
echo "CACHE_SIZE=${CACHE_SIZE}" echo "CACHE_SIZE=${CACHE_SIZE}"
echo "DNS_FQDN_REQUIRED=${DNS_FQDN_REQUIRED:-true}" echo "DNS_FQDN_REQUIRED=${DNS_FQDN_REQUIRED:-true}"
echo "DNS_BOGUS_PRIV=${DNS_BOGUS_PRIV:-true}" echo "DNS_BOGUS_PRIV=${DNS_BOGUS_PRIV:-true}"
echo "DNSMASQ_LISTENING=${DNSMASQ_LISTENING:-local}"
}>> "${setupVars}" }>> "${setupVars}"
chmod 644 "${setupVars}" chmod 644 "${setupVars}"
@@ -1845,7 +1872,7 @@ checkSelinux() {
local CURRENT_SELINUX local CURRENT_SELINUX
local SELINUX_ENFORCING=0 local SELINUX_ENFORCING=0
# Check for SELinux configuration file and getenforce command # Check for SELinux configuration file and getenforce command
if [[ -f /etc/selinux/config ]] && command -v getenforce &> /dev/null; then if [[ -f /etc/selinux/config ]] && is_command getenforce; then
# Check the default SELinux mode # Check the default SELinux mode
DEFAULT_SELINUX=$(awk -F= '/^SELINUX=/ {print $2}' /etc/selinux/config) DEFAULT_SELINUX=$(awk -F= '/^SELINUX=/ {print $2}' /etc/selinux/config)
case "${DEFAULT_SELINUX,,}" in case "${DEFAULT_SELINUX,,}" in
@@ -1904,7 +1931,7 @@ displayFinalMessage() {
additional="View the web interface at http://pi.hole/admin or http://${IPV4_ADDRESS%/*}/admin additional="View the web interface at http://pi.hole/admin or http://${IPV4_ADDRESS%/*}/admin
Your Admin Webpage login password is ${pwstring}" Your Admin Webpage login password is ${pwstring}"
fi fi
# Final completion message to user # Final completion message to user
whiptail --msgbox --backtitle "Make it so." --title "Installation Complete!" "Configure your devices to use the Pi-hole as their DNS server using: whiptail --msgbox --backtitle "Make it so." --title "Installation Complete!" "Configure your devices to use the Pi-hole as their DNS server using:
@@ -2077,7 +2104,6 @@ clone_or_update_repos() {
# shellcheck disable=SC2120 # shellcheck disable=SC2120
FTLinstall() { FTLinstall() {
# Local, named variables # Local, named variables
local latesttag
local str="Downloading and Installing FTL" local str="Downloading and Installing FTL"
printf " %b %s..." "${INFO}" "${str}" printf " %b %s..." "${INFO}" "${str}"
@@ -2148,7 +2174,7 @@ FTLinstall() {
disable_dnsmasq() { disable_dnsmasq() {
# dnsmasq can now be stopped and disabled if it exists # dnsmasq can now be stopped and disabled if it exists
if which dnsmasq &> /dev/null; then if is_command dnsmasq; then
if check_service_active "dnsmasq";then if check_service_active "dnsmasq";then
printf " %b FTL can now resolve DNS Queries without dnsmasq running separately\\n" "${INFO}" printf " %b FTL can now resolve DNS Queries without dnsmasq running separately\\n" "${INFO}"
stop_service dnsmasq stop_service dnsmasq
@@ -2261,7 +2287,7 @@ FTLcheckUpdate() {
printf " %b Checking for existing FTL binary...\\n" "${INFO}" printf " %b Checking for existing FTL binary...\\n" "${INFO}"
local ftlLoc local ftlLoc
ftlLoc=$(which pihole-FTL 2>/dev/null) ftlLoc=$(command -v pihole-FTL 2>/dev/null)
local ftlBranch local ftlBranch
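The switch from which to command -v above removes a dependency on an external binary and gives a clean exit status to branch on, for instance:

    if ftlLoc=$(command -v pihole-FTL 2>/dev/null); then
        echo "pihole-FTL found at ${ftlLoc}"
    else
        echo "pihole-FTL is not installed yet"
    fi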
@@ -2278,7 +2304,7 @@ FTLcheckUpdate() {
local localSha1 local localSha1
# if dnsmasq exists and is running at this point, force reinstall of FTL Binary # if dnsmasq exists and is running at this point, force reinstall of FTL Binary
if which dnsmasq &> /dev/null; then if is_command dnsmasq; then
if check_service_active "dnsmasq";then if check_service_active "dnsmasq";then
return 0 return 0
fi fi
@@ -2299,7 +2325,7 @@ FTLcheckUpdate() {
# We already have a pihole-FTL binary downloaded. # We already have a pihole-FTL binary downloaded.
# Alt branches don't have a tagged version against them, so just confirm the checksum of the local vs remote to decide whether we download or not # Alt branches don't have a tagged version against them, so just confirm the checksum of the local vs remote to decide whether we download or not
remoteSha1=$(curl -sSL --fail "https://ftl.pi-hole.net/${ftlBranch}/${binary}.sha1" | cut -d ' ' -f 1) remoteSha1=$(curl -sSL --fail "https://ftl.pi-hole.net/${ftlBranch}/${binary}.sha1" | cut -d ' ' -f 1)
localSha1=$(sha1sum "$(which pihole-FTL)" | cut -d ' ' -f 1) localSha1=$(sha1sum "$(command -v pihole-FTL)" | cut -d ' ' -f 1)
if [[ "${remoteSha1}" != "${localSha1}" ]]; then if [[ "${remoteSha1}" != "${localSha1}" ]]; then
printf " %b Checksums do not match, downloading from ftl.pi-hole.net.\\n" "${INFO}" printf " %b Checksums do not match, downloading from ftl.pi-hole.net.\\n" "${INFO}"
@@ -2329,7 +2355,7 @@ FTLcheckUpdate() {
printf " %b Latest FTL Binary already installed (%s). Confirming Checksum...\\n" "${INFO}" "${FTLlatesttag}" printf " %b Latest FTL Binary already installed (%s). Confirming Checksum...\\n" "${INFO}" "${FTLlatesttag}"
remoteSha1=$(curl -sSL --fail "https://github.com/pi-hole/FTL/releases/download/${FTLversion%$'\r'}/${binary}.sha1" | cut -d ' ' -f 1) remoteSha1=$(curl -sSL --fail "https://github.com/pi-hole/FTL/releases/download/${FTLversion%$'\r'}/${binary}.sha1" | cut -d ' ' -f 1)
localSha1=$(sha1sum "$(which pihole-FTL)" | cut -d ' ' -f 1) localSha1=$(sha1sum "$(command -v pihole-FTL)" | cut -d ' ' -f 1)
if [[ "${remoteSha1}" != "${localSha1}" ]]; then if [[ "${remoteSha1}" != "${localSha1}" ]]; then
printf " %b Corruption detected...\\n" "${INFO}" printf " %b Corruption detected...\\n" "${INFO}"
@@ -2439,7 +2465,7 @@ main() {
#In case of RPM based distro, select the proper PHP version #In case of RPM based distro, select the proper PHP version
if [[ "$PKG_MANAGER" == "yum" || "$PKG_MANAGER" == "dnf" ]] ; then if [[ "$PKG_MANAGER" == "yum" || "$PKG_MANAGER" == "dnf" ]] ; then
select_rpm_php select_rpm_php
fi fi
# Check if SELinux is Enforcing # Check if SELinux is Enforcing
@@ -2469,12 +2495,12 @@ main() {
get_available_interfaces get_available_interfaces
# Find interfaces and let the user choose one # Find interfaces and let the user choose one
chooseInterface chooseInterface
# find IPv4 and IPv6 information of the device
collect_v4andv6_information
# Decide what upstream DNS Servers to use # Decide what upstream DNS Servers to use
setDNS setDNS
# Give the user a choice of blocklists to include in their install. Or not. # Give the user a choice of blocklists to include in their install. Or not.
chooseBlocklists chooseBlocklists
# find IPv4 and IPv6 information of the device
collect_v4andv6_information
# Let the user decide if they want the web interface to be installed automatically # Let the user decide if they want the web interface to be installed automatically
setAdminFlag setAdminFlag
# Let the user decide if they want query logging enabled... # Let the user decide if they want query logging enabled...


@@ -73,19 +73,24 @@ if [[ -r "${piholeDir}/pihole.conf" ]]; then
echo -e " ${COL_LIGHT_RED}Ignoring overrides specified within pihole.conf! ${COL_NC}" echo -e " ${COL_LIGHT_RED}Ignoring overrides specified within pihole.conf! ${COL_NC}"
fi fi
# Generate new sqlite3 file from schema template # Generate new SQLite3 file from schema template
generate_gravity_database() { generate_gravity_database() {
sqlite3 "${1}" < "${gravityDBschema}" if ! pihole-FTL sqlite3 "${gravityDBfile}" < "${gravityDBschema}"; then
echo -e " ${CROSS} Unable to create ${gravityDBfile}"
return 1
fi
chown pihole:pihole "${gravityDBfile}"
chmod g+w "${piholeDir}" "${gravityDBfile}"
} }
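Every former sqlite3 call in this file now goes through the SQLite3 shell embedded in pihole-FTL, so gravity no longer needs the standalone sqlite3 binary; the embedded shell is invoked the same way, e.g.:

    pihole-FTL sqlite3 /etc/pihole/gravity.db "SELECT COUNT(*) FROM adlist;"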
# Copy data from old to new database file and swap them # Copy data from old to new database file and swap them
gravity_swap_databases() { gravity_swap_databases() {
local str copyGravity local str copyGravity oldAvail
str="Building tree" str="Building tree"
echo -ne " ${INFO} ${str}..." echo -ne " ${INFO} ${str}..."
# The index is intentionally not UNIQUE as poor quality adlists may contain domains more than once # The index is intentionally not UNIQUE as poor quality adlists may contain domains more than once
output=$( { sqlite3 "${gravityTEMPfile}" "CREATE INDEX idx_gravity ON gravity (domain, adlist_id);"; } 2>&1 ) output=$( { pihole-FTL sqlite3 "${gravityTEMPfile}" "CREATE INDEX idx_gravity ON gravity (domain, adlist_id);"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
@@ -97,22 +102,6 @@ gravity_swap_databases() {
str="Swapping databases" str="Swapping databases"
echo -ne " ${INFO} ${str}..." echo -ne " ${INFO} ${str}..."
# Gravity copying SQL script
copyGravity="$(cat "${gravityDBcopy}")"
if [[ "${gravityDBfile}" != "${gravityDBfile_default}" ]]; then
# Replace default gravity script location by custom location
copyGravity="${copyGravity//"${gravityDBfile_default}"/"${gravityDBfile}"}"
fi
output=$( { sqlite3 "${gravityTEMPfile}" <<< "${copyGravity}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to copy data from ${gravityDBfile} to ${gravityTEMPfile}\\n ${output}"
return 1
fi
echo -e "${OVER} ${TICK} ${str}"
# Swap databases and remove or conditionally rename old database # Swap databases and remove or conditionally rename old database
# Number of available blocks on disk # Number of available blocks on disk
availableBlocks=$(stat -f --format "%a" "${gravityDIR}") availableBlocks=$(stat -f --format "%a" "${gravityDIR}")
@@ -120,18 +109,24 @@ gravity_swap_databases() {
gravityBlocks=$(stat --format "%b" ${gravityDBfile}) gravityBlocks=$(stat --format "%b" ${gravityDBfile})
# Only keep the old database if available disk space is at least twice the size of the existing gravity.db. # Only keep the old database if available disk space is at least twice the size of the existing gravity.db.
# Better be safe than sorry... # Better be safe than sorry...
oldAvail=false
if [ "${availableBlocks}" -gt "$((gravityBlocks * 2))" ] && [ -f "${gravityDBfile}" ]; then if [ "${availableBlocks}" -gt "$((gravityBlocks * 2))" ] && [ -f "${gravityDBfile}" ]; then
echo -e " ${TICK} The old database remains available." oldAvail=true
mv "${gravityDBfile}" "${gravityOLDfile}" mv "${gravityDBfile}" "${gravityOLDfile}"
else else
rm "${gravityDBfile}" rm "${gravityDBfile}"
fi fi
mv "${gravityTEMPfile}" "${gravityDBfile}" mv "${gravityTEMPfile}" "${gravityDBfile}"
echo -e "${OVER} ${TICK} ${str}"
if $oldAvail; then
echo -e " ${TICK} The old database remains available."
fi
} }
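The keep-or-delete decision above compares free space against twice the current database size before deciding whether the old database is worth keeping; the same check in isolation (paths illustrative):

    gravityDIR="/etc/pihole"
    availableBlocks=$(stat -f --format "%a" "${gravityDIR}")        # free blocks on the filesystem
    gravityBlocks=$(stat --format "%b" "${gravityDIR}/gravity.db")  # blocks used by the database
    if [ "${availableBlocks}" -gt "$((gravityBlocks * 2))" ]; then
        echo "enough headroom to keep the old database around"
    fi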
# Update timestamp when the gravity table was last updated successfully # Update timestamp when the gravity table was last updated successfully
update_gravity_timestamp() { update_gravity_timestamp() {
output=$( { printf ".timeout 30000\\nINSERT OR REPLACE INTO info (property,value) values ('updated',cast(strftime('%%s', 'now') as int));" | sqlite3 "${gravityDBfile}"; } 2>&1 ) output=$( { printf ".timeout 30000\\nINSERT OR REPLACE INTO info (property,value) values ('updated',cast(strftime('%%s', 'now') as int));" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
@@ -172,7 +167,7 @@ database_table_from_file() {
# Get MAX(id) from domainlist when INSERTing into this table # Get MAX(id) from domainlist when INSERTing into this table
if [[ "${table}" == "domainlist" ]]; then if [[ "${table}" == "domainlist" ]]; then
rowid="$(sqlite3 "${gravityDBfile}" "SELECT MAX(id) FROM domainlist;")" rowid="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT MAX(id) FROM domainlist;")"
if [[ -z "$rowid" ]]; then if [[ -z "$rowid" ]]; then
rowid=0 rowid=0
fi fi
@@ -202,7 +197,7 @@ database_table_from_file() {
# Store domains in database table specified by ${table} # Store domains in database table specified by ${table}
# Use printf as .mode and .import need to be on separate lines # Use printf as .mode and .import need to be on separate lines
# see https://unix.stackexchange.com/a/445615/83260 # see https://unix.stackexchange.com/a/445615/83260
output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" %s\\n" "${tmpFile}" "${table}" | sqlite3 "${gravityDBfile}"; } 2>&1 ) output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" %s\\n" "${tmpFile}" "${table}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
@@ -213,7 +208,7 @@ database_table_from_file() {
# Move source file to backup directory, create directory if not existing # Move source file to backup directory, create directory if not existing
mkdir -p "${backup_path}" mkdir -p "${backup_path}"
mv "${source}" "${backup_file}" 2> /dev/null || \ mv "${source}" "${backup_file}" 2> /dev/null || \
echo -e " ${CROSS} Unable to backup ${source} to ${backup_path}" echo -e " ${CROSS} Unable to backup ${source} to ${backup_path}"
# Delete tmpFile # Delete tmpFile
rm "${tmpFile}" > /dev/null 2>&1 || \ rm "${tmpFile}" > /dev/null 2>&1 || \
@@ -222,7 +217,7 @@ database_table_from_file() {
# Update timestamp of last update of this list. We store this in the "old" database as all values in the new database will later be overwritten # Update timestamp of last update of this list. We store this in the "old" database as all values in the new database will later be overwritten
database_adlist_updated() { database_adlist_updated() {
output=$( { printf ".timeout 30000\\nUPDATE adlist SET date_updated = (cast(strftime('%%s', 'now') as int)) WHERE id = %i;\\n" "${1}" | sqlite3 "${gravityDBfile}"; } 2>&1 ) output=$( { printf ".timeout 30000\\nUPDATE adlist SET date_updated = (cast(strftime('%%s', 'now') as int)) WHERE id = %i;\\n" "${1}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
@@ -233,7 +228,7 @@ database_adlist_updated() {
# Check if a column with name ${2} exists in gravity table with name ${1} # Check if a column with name ${2} exists in gravity table with name ${1}
gravity_column_exists() { gravity_column_exists() {
output=$( { printf ".timeout 30000\\nSELECT EXISTS(SELECT * FROM pragma_table_info('%s') WHERE name='%s');\\n" "${1}" "${2}" | sqlite3 "${gravityDBfile}"; } 2>&1 ) output=$( { printf ".timeout 30000\\nSELECT EXISTS(SELECT * FROM pragma_table_info('%s') WHERE name='%s');\\n" "${1}" "${2}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
if [[ "${output}" == "1" ]]; then if [[ "${output}" == "1" ]]; then
return 0 # Bash 0 is success return 0 # Bash 0 is success
fi fi
@@ -248,7 +243,7 @@ database_adlist_number() {
return; return;
fi fi
output=$( { printf ".timeout 30000\\nUPDATE adlist SET number = %i, invalid_domains = %i WHERE id = %i;\\n" "${num_lines}" "${num_invalid}" "${1}" | sqlite3 "${gravityDBfile}"; } 2>&1 ) output=$( { printf ".timeout 30000\\nUPDATE adlist SET number = %i, invalid_domains = %i WHERE id = %i;\\n" "${num_source_lines}" "${num_invalid}" "${1}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
@@ -264,7 +259,7 @@ database_adlist_status() {
return; return;
fi fi
output=$( { printf ".timeout 30000\\nUPDATE adlist SET status = %i WHERE id = %i;\\n" "${2}" "${1}" | sqlite3 "${gravityDBfile}"; } 2>&1 ) output=$( { printf ".timeout 30000\\nUPDATE adlist SET status = %i WHERE id = %i;\\n" "${2}" "${1}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
@@ -279,7 +274,10 @@ migrate_to_database() {
if [ ! -e "${gravityDBfile}" ]; then if [ ! -e "${gravityDBfile}" ]; then
# Create new database file - note that this will be created in version 1 # Create new database file - note that this will be created in version 1
echo -e " ${INFO} Creating new gravity database" echo -e " ${INFO} Creating new gravity database"
generate_gravity_database "${gravityDBfile}" if ! generate_gravity_database; then
echo -e " ${CROSS} Error creating new gravity database. Please contact support."
return 1
fi
# Check if gravity database needs to be updated # Check if gravity database needs to be updated
upgrade_gravityDB "${gravityDBfile}" "${piholeDir}" upgrade_gravityDB "${gravityDBfile}" "${piholeDir}"
@@ -378,9 +376,9 @@ gravity_DownloadBlocklists() {
fi fi
# Retrieve source URLs from gravity database # Retrieve source URLs from gravity database
# We source only enabled adlists, sqlite3 stores boolean values as 0 (false) or 1 (true) # We source only enabled adlists, SQLite3 stores boolean values as 0 (false) or 1 (true)
mapfile -t sources <<< "$(sqlite3 "${gravityDBfile}" "SELECT address FROM vw_adlist;" 2> /dev/null)" mapfile -t sources <<< "$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT address FROM vw_adlist;" 2> /dev/null)"
mapfile -t sourceIDs <<< "$(sqlite3 "${gravityDBfile}" "SELECT id FROM vw_adlist;" 2> /dev/null)" mapfile -t sourceIDs <<< "$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT id FROM vw_adlist;" 2> /dev/null)"
# Parse source domains from $sources # Parse source domains from $sources
mapfile -t sourceDomains <<< "$( mapfile -t sourceDomains <<< "$(
@@ -394,14 +392,12 @@ gravity_DownloadBlocklists() {
)" )"
local str="Pulling blocklist source list into range" local str="Pulling blocklist source list into range"
echo -e "${OVER} ${TICK} ${str}"
if [[ -n "${sources[*]}" ]] && [[ -n "${sourceDomains[*]}" ]]; then if [[ -z "${sources[*]}" ]] || [[ -z "${sourceDomains[*]}" ]]; then
echo -e "${OVER} ${TICK} ${str}"
else
echo -e "${OVER} ${CROSS} ${str}"
echo -e " ${INFO} No source list found, or it is empty" echo -e " ${INFO} No source list found, or it is empty"
echo "" echo ""
return 1 unset sources
fi fi
local url domain agent cmd_ext str target compression local url domain agent cmd_ext str target compression
@@ -411,7 +407,7 @@ gravity_DownloadBlocklists() {
str="Preparing new gravity database" str="Preparing new gravity database"
echo -ne " ${INFO} ${str}..." echo -ne " ${INFO} ${str}..."
rm "${gravityTEMPfile}" > /dev/null 2>&1 rm "${gravityTEMPfile}" > /dev/null 2>&1
output=$( { sqlite3 "${gravityTEMPfile}" < "${gravityDBschema}"; } 2>&1 ) output=$( { pihole-FTL sqlite3 "${gravityTEMPfile}" < "${gravityDBschema}"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
@@ -430,9 +426,9 @@ gravity_DownloadBlocklists() {
compression="--compressed" compression="--compressed"
echo -e " ${INFO} Using libz compression\n" echo -e " ${INFO} Using libz compression\n"
else else
compression="" compression=""
echo -e " ${INFO} Libz compression not available\n" echo -e " ${INFO} Libz compression not available\n"
fi fi
# Loop through $sources and download each one # Loop through $sources and download each one
for ((i = 0; i < "${#sources[@]}"; i++)); do for ((i = 0; i < "${#sources[@]}"; i++)); do
url="${sources[$i]}" url="${sources[$i]}"
@@ -462,16 +458,35 @@ gravity_DownloadBlocklists() {
check_url="$( sed -re 's#([^:/]*://)?([^/]+)@#\1\2#' <<< "$url" )" check_url="$( sed -re 's#([^:/]*://)?([^/]+)@#\1\2#' <<< "$url" )"
if [[ "${check_url}" =~ ${regex} ]]; then if [[ "${check_url}" =~ ${regex} ]]; then
echo -e " ${CROSS} Invalid Target" echo -e " ${CROSS} Invalid Target"
else else
gravity_DownloadBlocklistFromUrl "${url}" "${cmd_ext}" "${agent}" "${sourceIDs[$i]}" "${saveLocation}" "${target}" "${compression}" gravity_DownloadBlocklistFromUrl "${url}" "${cmd_ext}" "${agent}" "${sourceIDs[$i]}" "${saveLocation}" "${target}" "${compression}"
fi fi
echo "" echo ""
done done
str="Creating new gravity databases"
echo -ne " ${INFO} ${str}..."
# Gravity copying SQL script
copyGravity="$(cat "${gravityDBcopy}")"
if [[ "${gravityDBfile}" != "${gravityDBfile_default}" ]]; then
# Replace default gravity script location by custom location
copyGravity="${copyGravity//"${gravityDBfile_default}"/"${gravityDBfile}"}"
fi
output=$( { pihole-FTL sqlite3 "${gravityTEMPfile}" <<< "${copyGravity}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to copy data from ${gravityDBfile} to ${gravityTEMPfile}\\n ${output}"
return 1
fi
echo -e "${OVER} ${TICK} ${str}"
str="Storing downloaded domains in new gravity database" str="Storing downloaded domains in new gravity database"
echo -ne " ${INFO} ${str}..." echo -ne " ${INFO} ${str}..."
output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" gravity\\n" "${target}" | sqlite3 "${gravityTEMPfile}"; } 2>&1 ) output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" gravity\\n" "${target}" | pihole-FTL sqlite3 "${gravityTEMPfile}"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
@@ -503,8 +518,9 @@ gravity_DownloadBlocklists() {
gravity_Blackbody=true gravity_Blackbody=true
} }
total_num=0 # num_target_lines increases for every correctly added domain in parseList()
num_lines=0 num_target_lines=0
num_source_lines=0
num_invalid=0 num_invalid=0
parseList() { parseList() {
local adlistID="${1}" src="${2}" target="${3}" incorrect_lines local adlistID="${1}" src="${2}" target="${3}" incorrect_lines
@@ -516,18 +532,20 @@ parseList() {
# Find (up to) five domains containing invalid characters (see above) # Find (up to) five domains containing invalid characters (see above)
incorrect_lines="$(sed -e "/[^a-zA-Z0-9.\_-]/!d" "${src}" | head -n 5)" incorrect_lines="$(sed -e "/[^a-zA-Z0-9.\_-]/!d" "${src}" | head -n 5)"
local num_target_lines num_correct_lines num_invalid local num_target_lines_new num_correct_lines
# Get number of lines in source file # Get number of lines in source file
num_lines="$(grep -c "^" "${src}")" num_source_lines="$(grep -c "^" "${src}")"
# Get number of lines in destination file # Get the new number of lines in destination file
num_target_lines="$(grep -c "^" "${target}")" num_target_lines_new="$(grep -c "^" "${target}")"
num_correct_lines="$(( num_target_lines-total_num ))" # Number of new correctly added lines
total_num="$num_target_lines" num_correct_lines="$(( num_target_lines_new-num_target_lines ))"
num_invalid="$(( num_lines-num_correct_lines ))" # Upate number of lines in target file
num_target_lines="$num_target_lines_new"
num_invalid="$(( num_source_lines-num_correct_lines ))"
if [[ "${num_invalid}" -eq 0 ]]; then if [[ "${num_invalid}" -eq 0 ]]; then
echo " ${INFO} Analyzed ${num_lines} domains" echo " ${INFO} Analyzed ${num_source_lines} domains"
else else
echo " ${INFO} Analyzed ${num_lines} domains, ${num_invalid} domains invalid!" echo " ${INFO} Analyzed ${num_source_lines} domains, ${num_invalid} domains invalid!"
fi fi
# Display sample of invalid lines if we found some # Display sample of invalid lines if we found some
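With the renamed counters, the arithmetic above works out as in this worked example with made-up list sizes:

    num_source_lines=1000       # lines in the downloaded source list
    num_target_lines=2500       # domains already in the target before this list
    num_target_lines_new=3480   # domains in the target after importing this list
    num_correct_lines=$(( num_target_lines_new - num_target_lines ))  # 980 domains accepted
    num_invalid=$(( num_source_lines - num_correct_lines ))           # 20 domains rejected
    echo " Analyzed ${num_source_lines} domains, ${num_invalid} domains invalid!"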
@@ -583,28 +601,32 @@ gravity_DownloadBlocklistFromUrl() {
blocked=false blocked=false
case $BLOCKINGMODE in case $BLOCKINGMODE in
"IP-NODATA-AAAA"|"IP") "IP-NODATA-AAAA"|"IP")
# Get IP address of this domain # Get IP address of this domain
ip="$(dig "${domain}" +short)" ip="$(dig "${domain}" +short)"
# Check if this IP matches any IP of the system # Check if this IP matches any IP of the system
if [[ -n "${ip}" && $(grep -Ec "inet(|6) ${ip}" <<< "$(ip a)") -gt 0 ]]; then if [[ -n "${ip}" && $(grep -Ec "inet(|6) ${ip}" <<< "$(ip a)") -gt 0 ]]; then
blocked=true blocked=true
fi;; fi;;
"NXDOMAIN") "NXDOMAIN")
if [[ $(dig "${domain}" | grep "NXDOMAIN" -c) -ge 1 ]]; then if [[ $(dig "${domain}" | grep "NXDOMAIN" -c) -ge 1 ]]; then
blocked=true blocked=true
fi;; fi;;
"NODATA")
if [[ $(dig "${domain}" | grep "NOERROR" -c) -ge 1 ]] && [[ -z $(dig +short "${domain}") ]]; then
blocked=true
fi;;
"NULL"|*) "NULL"|*)
if [[ $(dig "${domain}" +short | grep "0.0.0.0" -c) -ge 1 ]]; then if [[ $(dig "${domain}" +short | grep "0.0.0.0" -c) -ge 1 ]]; then
blocked=true blocked=true
fi;; fi;;
esac esac
if [[ "${blocked}" == true ]]; then if [[ "${blocked}" == true ]]; then
printf -v ip_addr "%s" "${PIHOLE_DNS_1%#*}" printf -v ip_addr "%s" "${PIHOLE_DNS_1%#*}"
if [[ ${PIHOLE_DNS_1} != *"#"* ]]; then if [[ ${PIHOLE_DNS_1} != *"#"* ]]; then
port=53 port=53
else else
printf -v port "%s" "${PIHOLE_DNS_1#*#}" printf -v port "%s" "${PIHOLE_DNS_1#*#}"
fi fi
ip=$(dig "@${ip_addr}" -p "${port}" +short "${domain}" | tail -1) ip=$(dig "@${ip_addr}" -p "${port}" +short "${domain}" | tail -1)
if [[ $(echo "${url}" | awk -F '://' '{print $1}') = "https" ]]; then if [[ $(echo "${url}" | awk -F '://' '{print $1}') = "https" ]]; then
@@ -623,11 +645,11 @@ gravity_DownloadBlocklistFromUrl() {
case $url in case $url in
# Did we "download" a local file? # Did we "download" a local file?
"file"*) "file"*)
if [[ -s "${patternBuffer}" ]]; then if [[ -s "${patternBuffer}" ]]; then
echo -e "${OVER} ${TICK} ${str} Retrieval successful"; success=true echo -e "${OVER} ${TICK} ${str} Retrieval successful"; success=true
else else
echo -e "${OVER} ${CROSS} ${str} Not found / empty list" echo -e "${OVER} ${CROSS} ${str} Not found / empty list"
fi;; fi;;
# Did we "download" a remote file? # Did we "download" a remote file?
*) *)
# Determine "Status:" output based on HTTP response # Determine "Status:" output based on HTTP response
@@ -686,7 +708,7 @@ gravity_DownloadBlocklistFromUrl() {
else else
echo -e " ${CROSS} List download failed: ${COL_LIGHT_RED}no cached list available${COL_NC}" echo -e " ${CROSS} List download failed: ${COL_LIGHT_RED}no cached list available${COL_NC}"
# Manually reset these two numbers because we do not call parseList here # Manually reset these two numbers because we do not call parseList here
num_lines=0 num_source_lines=0
num_invalid=0 num_invalid=0
database_adlist_number "${adlistID}" database_adlist_number "${adlistID}"
database_adlist_status "${adlistID}" "4" database_adlist_status "${adlistID}" "4"
@@ -769,12 +791,12 @@ gravity_Table_Count() {
local table="${1}" local table="${1}"
local str="${2}" local str="${2}"
local num local num
num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${table};")" num="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${table};")"
if [[ "${table}" == "vw_gravity" ]]; then if [[ "${table}" == "vw_gravity" ]]; then
local unique local unique
unique="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(DISTINCT domain) FROM ${table};")" unique="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(DISTINCT domain) FROM ${table};")"
echo -e " ${INFO} Number of ${str}: ${num} (${COL_BOLD}${unique} unique domains${COL_NC})" echo -e " ${INFO} Number of ${str}: ${num} (${COL_BOLD}${unique} unique domains${COL_NC})"
sqlite3 "${gravityDBfile}" "INSERT OR REPLACE INTO info (property,value) VALUES ('gravity_count',${unique});" pihole-FTL sqlite3 "${gravityDBfile}" "INSERT OR REPLACE INTO info (property,value) VALUES ('gravity_count',${unique});"
else else
echo -e " ${INFO} Number of ${str}: ${num}" echo -e " ${INFO} Number of ${str}: ${num}"
fi fi
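The change in this hunk is mechanical: every direct sqlite3 call is routed through the SQLite shell embedded in pihole-FTL, so gravity no longer has to rely on a separately installed sqlite3 binary. The invocation is otherwise unchanged, for example:

```
# Query the gravity database through FTL's embedded SQLite shell
gravityDBfile="/etc/pihole/gravity.db"
pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(DISTINCT domain) FROM vw_gravity;"
```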
@@ -845,6 +867,49 @@ gravity_Cleanup() {
fi fi
} }
database_recovery() {
local result
local str="Checking integrity of existing gravity database"
local option="${1}"
echo -ne " ${INFO} ${str}..."
if result="$(pihole-FTL sqlite3 "${gravityDBfile}" "PRAGMA integrity_check" 2>&1)"; then
echo -e "${OVER} ${TICK} ${str} - no errors found"
str="Checking foreign keys of existing gravity database"
echo -ne " ${INFO} ${str}..."
if result="$(pihole-FTL sqlite3 "${gravityDBfile}" "PRAGMA foreign_key_check" 2>&1)"; then
echo -e "${OVER} ${TICK} ${str} - no errors found"
if [[ "${option}" != "force" ]]; then
return
fi
else
echo -e "${OVER} ${CROSS} ${str} - errors found:"
while IFS= read -r line ; do echo " - $line"; done <<< "$result"
fi
else
echo -e "${OVER} ${CROSS} ${str} - errors found:"
while IFS= read -r line ; do echo " - $line"; done <<< "$result"
fi
str="Trying to recover existing gravity database"
echo -ne " ${INFO} ${str}..."
# We have to remove any possibly existing recovery database or this will fail
rm -f "${gravityDBfile}.recovered" > /dev/null 2>&1
if result="$(pihole-FTL sqlite3 "${gravityDBfile}" ".recover" | pihole-FTL sqlite3 "${gravityDBfile}.recovered" 2>&1)"; then
echo -e "${OVER} ${TICK} ${str} - success"
mv "${gravityDBfile}" "${gravityDBfile}.old"
mv "${gravityDBfile}.recovered" "${gravityDBfile}"
echo -ne " ${INFO} ${gravityDBfile} has been recovered"
echo -ne " ${INFO} The old ${gravityDBfile} has been moved to ${gravityDBfile}.old"
else
echo -e "${OVER} ${CROSS} ${str} - the following errors happened:"
while IFS= read -r line ; do echo " - $line"; done <<< "$result"
echo -e " ${CROSS} Recovery failed. Try \"pihole -r recreate\" instead."
exit 1
fi
echo ""
}
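database_recovery() is not called directly; it is reached through the new -r/--repair option handled by repairSelector further down. Matching the help text shown there, the two ways to trigger it are:

```
pihole -g -r recover         # check integrity and foreign keys, recover only if errors are found
pihole -g -r recover force   # run the .recover path even when no damage was detected (last resort)
```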
helpFunc() { helpFunc() {
echo "Usage: pihole -g echo "Usage: pihole -g
Update domains from blocklists specified in adlists.list Update domains from blocklists specified in adlists.list
@@ -855,10 +920,37 @@ Options:
exit 0 exit 0
} }
repairSelector() {
case "$1" in
"recover") recover_database=true;;
"recreate") recreate_database=true;;
*) echo "Usage: pihole -g -r {recover,recreate}
Attempt to repair gravity database
Available options:
pihole -g -r recover Try to recover a damaged gravity database file.
Pi-hole tries to restore as much as possible
from a corrupted gravity database.
pihole -g -r recover force Pi-hole will run the recovery process even when
no damage is detected. This option is meant to be
a last resort. Recovery is a fragile task
consuming a lot of resources and shouldn't be
performed unnecessarily.
pihole -g -r recreate Create a new gravity database file from scratch.
This will remove your existing gravity database
and create a new file from scratch. If you still
have the migration backup created when migrating
to Pi-hole v5.0, Pi-hole will import these files."
exit 0;;
esac
}
for var in "$@"; do for var in "$@"; do
case "${var}" in case "${var}" in
"-f" | "--force" ) forceDelete=true;; "-f" | "--force" ) forceDelete=true;;
"-r" | "--recreate" ) recreate_database=true;; "-r" | "--repair" ) repairSelector "$3";;
"-h" | "--help" ) helpFunc;; "-h" | "--help" ) helpFunc;;
esac esac
done done
@@ -872,7 +964,7 @@ fi
gravity_Trap gravity_Trap
if [[ "${recreate_database:-}" == true ]]; then if [[ "${recreate_database:-}" == true ]]; then
str="Restoring from migration backup" str="Recreating gravity database from migration backup"
echo -ne "${INFO} ${str}..." echo -ne "${INFO} ${str}..."
rm "${gravityDBfile}" rm "${gravityDBfile}"
pushd "${piholeDir}" > /dev/null || exit pushd "${piholeDir}" > /dev/null || exit
@@ -881,8 +973,15 @@ if [[ "${recreate_database:-}" == true ]]; then
echo -e "${OVER} ${TICK} ${str}" echo -e "${OVER} ${TICK} ${str}"
fi fi
if [[ "${recover_database:-}" == true ]]; then
database_recovery "$4"
fi
# Move possibly existing legacy files to the gravity database # Move possibly existing legacy files to the gravity database
migrate_to_database if ! migrate_to_database; then
echo -e " ${CROSS} Unable to migrate to database. Please contact support."
exit 1
fi
if [[ "${forceDelete:-}" == true ]]; then if [[ "${forceDelete:-}" == true ]]; then
str="Deleting existing list cache" str="Deleting existing list cache"
@@ -893,14 +992,21 @@ if [[ "${forceDelete:-}" == true ]]; then
fi fi
# Gravity downloads blocklists next # Gravity downloads blocklists next
gravity_CheckDNSResolutionAvailable if ! gravity_CheckDNSResolutionAvailable; then
echo -e " ${CROSS} Can not complete gravity update, no DNS is available. Please contact support."
exit 1
fi
gravity_DownloadBlocklists gravity_DownloadBlocklists
# Create local.list # Create local.list
gravity_generateLocalList gravity_generateLocalList
# Migrate rest of the data from old to new database # Migrate rest of the data from old to new database
gravity_swap_databases if ! gravity_swap_databases; then
echo -e " ${CROSS} Unable to create database. Please contact support."
exit 1
fi
# Update gravity timestamp # Update gravity timestamp
update_gravity_timestamp update_gravity_timestamp


@@ -144,7 +144,9 @@ Command line arguments can be arbitrarily combined, e.g:
Start ftl in foreground with more verbose logging, process everything and shutdown immediately Start ftl in foreground with more verbose logging, process everything and shutdown immediately
.br .br
.SH "SEE ALSO" .SH "SEE ALSO"
\fBpihole\fR(8), \fBpihole-FTL.conf\fR(5) \fBpihole\fR(8)
.br
\fBFor FTL's config options please see https://docs.pi-hole.net/ftldns/configfile/\fR
.br .br
.SH "COLOPHON" .SH "COLOPHON"


@@ -1,313 +0,0 @@
.TH "pihole-FTL.conf" "5" "pihole-FTL.conf" "pihole-FTL.conf" "November 2020"
.SH "NAME"
pihole-FTL.conf - FTL's config file
.br
.SH "DESCRIPTION"
/etc/pihole/pihole-FTL.conf will be read by \fBpihole-FTL(8)\fR on startup.
.br
For each setting the option shown first is the default.
.br
\fBBLOCKINGMODE=IP|IP-AAAA-NODATA|NODATA|NXDOMAIN|NULL\fR
.br
How should FTL reply to blocked queries?
IP - Pi-hole's IPs for blocked domains
IP-AAAA-NODATA - Pi-hole's IP + NODATA-IPv6 for blocked domains
NODATA - Using NODATA for blocked domains
NXDOMAIN - NXDOMAIN for blocked domains
NULL - Null IPs for blocked domains
.br
\fBCNAME_DEEP_INSPECT=true|false\fR
.br
Use this option to disable deep CNAME inspection. This might be beneficial for very low-end devices.
.br
\fBBLOCK_ESNI=true|false\fR
.br
Block requests to _esni.* sub-domains.
.br
\fBMAXLOGAGE=24.0\fR
.br
Up to how many hours of queries should be imported from the database and logs?
.br
Maximum is 744 (31 days)
.br
\fBPRIVACYLEVEL=0|1|2|3|4\fR
.br
Privacy level used to collect Pi-hole statistics.
.br
0 - show everything
.br
1 - hide domains
.br
2 - hide domains and clients
.br
3 - anonymous mode (hide everything)
.br
4 - disable all statistics
.br
\fBIGNORE_LOCALHOST=no|yes\fR
.br
Should FTL ignore queries coming from the local machine?
.br
\fBAAAA_QUERY_ANALYSIS=yes|no\fR
.br
Should FTL analyze AAAA queries?
.br
\fBANALYZE_ONLY_A_AND_AAAA=false|true\fR
.br
Should FTL only analyze A and AAAA queries?
.br
\fBSOCKET_LISTENING=localonly|all\fR
.br
Listen only for local socket connections on the API port or permit all connections.
.br
\fBFTLPORT=4711\fR
.br
On which port should FTL be listening?
.br
\fBRESOLVE_IPV6=yes|no\fR
.br
Should FTL try to resolve IPv6 addresses to hostnames?
.br
\fBRESOLVE_IPV4=yes|no\fR
.br
Should FTL try to resolve IPv4 addresses to hostnames?
.br
\fBDELAY_STARTUP=0\fR
.br
Time in seconds (between 0 and 300) to delay FTL startup.
.br
\fBNICE=-10\fR
.br
Set the niceness of the Pi-hole FTL process.
.br
Can be disabled altogether by setting a value of -999.
.br
\fBNAMES_FROM_NETDB=true|false\fR
.br
Control whether FTL should use a fallback option and try to obtain client names from checking the network table.
.br
E.g. IPv6 clients without a hostname will be compared via MAC address to known clients.
.br
\fBREFRESH_HOSTNAMES=IPV4|ALL|NONE\fR
.br
Change how (and if) hourly PTR requests are made to check for changes in client and upstream server hostnames:
.br
IPV4 - Do the hourly PTR lookups only for IPv4 addresses. This resolves issues in networks with many short-lived PE IPv6 addresses.
.br
ALL - Do the hourly PTR lookups for all addresses. This can create a lot of PTR queries in networks with many IPv6 addresses.
.br
NONE - Don't do hourly PTR lookups. Look up hostnames once (when first seeing a client) and never again. Future hostname changes may be missed.
.br
\fBMAXNETAGE=365\fR
.br
IP addresses (and associated host names) older than the specified number of days are removed.
.br
This avoids dead entries in the network overview table.
.br
\fBEDNS0_ECS=true|false\fR
.br
Should we overwrite the query source when client information is provided through EDNS0 client subnet (ECS) information?
.br
\fBPARSE_ARP_CACHE=true|false\fR
.br
Parse ARP cache to fill network overview table.
.br
\fBDBIMPORT=yes|no\fR
.br
Should FTL load information from the database on startup to be aware of the most recent history?
.br
\fBMAXDBDAYS=365\fR
.br
How long should queries be stored in the database? Setting this to 0 disables the database
.br
\fBDBINTERVAL=1.0\fR
.br
How often do we store queries in FTL's database [minutes]?
.br
Accepts value between 0.1 (6 sec) and 1440 (1 day)
.br
\fBDBFILE=/etc/pihole/pihole-FTL.db\fR
.br
Specify path and filename of FTL's SQLite long-term database.
.br
Setting this to DBFILE= disables the database altogether
.br
\fBLOGFILE=/var/log/pihole-FTL.log\fR
.br
The location of FTL's log file.
.br
\fBPIDFILE=/run/pihole-FTL.pid\fR
.br
The file which contains the PID of FTL's main process.
.br
\fBPORTFILE=/run/pihole-FTL.port\fR
.br
Specify path and filename where the FTL process will write its API port number.
.br
\fBSOCKETFILE=/run/pihole/FTL.sock\fR
.br
The file containing the socket FTL's API is listening on.
.br
\fBSETUPVARSFILE=/etc/pihole/setupVars.conf\fR
.br
The config file of Pi-hole containing, e.g., the current blocking status (do not change).
.br
\fBMACVENDORDB=/etc/pihole/macvendor.db\fR
.br
The database containing MAC -> Vendor information for the network table.
.br
\fBGRAVITYDB=/etc/pihole/gravity.db\fR
.br
Specify path and filename of FTL's SQLite3 gravity database. This database contains all domains relevant for Pi-hole's DNS blocking.
.br
\fBDEBUG_ALL=false|true\fR
.br
Enable all debug flags. If this is set to true, all other debug config options are ignored.
.br
\fBDEBUG_DATABASE=false|true\fR
.br
Print debugging information about database actions such as SQL statements and performance.
.br
\fBDEBUG_NETWORKING=false|true\fR
.br
Prints a list of the detected network interfaces on the startup of FTL.
.br
\fBDEBUG_LOCKS=false|true\fR
.br
Print information about shared memory locks.
.br
Messages will be generated when waiting, obtaining, and releasing a lock.
.br
\fBDEBUG_QUERIES=false|true\fR
.br
Print extensive DNS query information (domains, types, replies, etc.).
.br
\fBDEBUG_FLAGS=false|true\fR
.br
Print flags of queries received by the DNS hooks.
.br
Only effective when \fBDEBUG_QUERIES\fR is enabled as well.
.br
\fBDEBUG_SHMEM=false|true\fR
.br
Print information about shared memory buffers.
.br
Messages are either about creating or enlarging shmem objects or string injections.
.br
\fBDEBUG_GC=false|true\fR
.br
Print information about garbage collection (GC):
.br
What is to be removed, how many have been removed and how long did GC take.
.br
\fBDEBUG_ARP=false|true\fR
.br
Print information about ARP table processing:
.br
How long did parsing take, whether read MAC addresses are valid, and if the macvendor.db file exists.
.br
\fBDEBUG_REGEX=false|true\fR
.br
Controls if FTL should print extended details about regex matching.
.br
\fBDEBUG_API=false|true\fR
.br
Print extra debugging information during telnet API calls.
.br
Currently only used to send extra information when getting all queries.
.br
\fBDEBUG_OVERTIME=false|true\fR
.br
Print information about overTime memory operations, such as initializing or moving overTime slots.
.br
\fBDEBUG_EXTBLOCKED=false|true\fR
.br
Print information about why FTL decided that certain queries were recognized as being externally blocked.
.br
\fBDEBUG_CAPS=false|true\fR
.br
Print information about POSIX capabilities granted to the FTL process.
.br
The current capabilities are printed on receipt of SIGHUP i.e. after executing `killall -HUP pihole-FTL`.
.br
\fBDEBUG_DNSMASQ_LINES=false|true\fR
.br
Print file and line causing a dnsmasq event into FTL's log files.
.br
This is handy to implement additional hooks missing from FTL.
.br
\fBDEBUG_VECTORS=false|true\fR
.br
FTL uses dynamically allocated vectors for various tasks.
.br
This config option enables extensive debugging information such as information about allocation, referencing, deletion, and appending.
.br
\fBDEBUG_RESOLVER=false|true\fR
.br
Extensive information about hostname resolution like which DNS servers are used in the first and second hostname resolving tries.
.br
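Although this page is removed in favour of the online documentation at https://docs.pi-hole.net/ftldns/configfile/, the format it described is plain key=value, one setting per line. Purely as an illustration (not a recommended configuration), a pihole-FTL.conf combining a few of the options above could look like:

```
# /etc/pihole/pihole-FTL.conf - illustrative example only
BLOCKINGMODE=NULL
CNAME_DEEP_INSPECT=true
MAXLOGAGE=24.0
PRIVACYLEVEL=0
MAXDBDAYS=365
```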
.SH "SEE ALSO"
\fBpihole\fR(8), \fBpihole-FTL\fR(8)
.br
.SH "COLOPHON"
Pi-hole : The Faster-Than-Light (FTL) Engine is a lightweight, purpose-built daemon used to provide statistics needed for the Pi-hole Web Interface, and its API can be easily integrated into your own projects. Although it is an optional component of the Pi-hole ecosystem, it will be installed by default to provide statistics. As the name implies, FTL does its work \fIvery quickly\fR!
.br
Get sucked into the latest news and community activity by entering Pi-hole's orbit. Information about Pi-hole, and the latest version of the software can be found at https://pi-hole.net
.br

66
pihole

@@ -21,6 +21,9 @@ readonly FTL_PID_FILE="/run/pihole-FTL.pid"
readonly colfile="${PI_HOLE_SCRIPT_DIR}/COL_TABLE" readonly colfile="${PI_HOLE_SCRIPT_DIR}/COL_TABLE"
source "${colfile}" source "${colfile}"
readonly utilsfile="${PI_HOLE_SCRIPT_DIR}/utils.sh"
source "${utilsfile}"
webpageFunc() { webpageFunc() {
source "${PI_HOLE_SCRIPT_DIR}/webpage.sh" source "${PI_HOLE_SCRIPT_DIR}/webpage.sh"
main "$@" main "$@"
@@ -71,8 +74,7 @@ reconfigurePiholeFunc() {
} }
updateGravityFunc() { updateGravityFunc() {
"${PI_HOLE_SCRIPT_DIR}"/gravity.sh "$@" exec "${PI_HOLE_SCRIPT_DIR}"/gravity.sh "$@"
exit $?
} }
queryFunc() { queryFunc() {
@@ -95,8 +97,7 @@ uninstallFunc() {
versionFunc() { versionFunc() {
shift shift
"${PI_HOLE_SCRIPT_DIR}"/version.sh "$@" exec "${PI_HOLE_SCRIPT_DIR}"/version.sh "$@"
exit 0
} }
# Get PID of main pihole-FTL process # Get PID of main pihole-FTL process
@@ -225,8 +226,7 @@ Time:
fi fi
local str="Pi-hole Disabled" local str="Pi-hole Disabled"
sed -i "/BLOCKING_ENABLED=/d" "${setupVars}" addOrEditKeyValPair "BLOCKING_ENABLED" "false" "${setupVars}"
echo "BLOCKING_ENABLED=false" >> "${setupVars}"
fi fi
else else
# Enable Pi-hole # Enable Pi-hole
@@ -238,8 +238,7 @@ Time:
echo -e " ${INFO} Enabling blocking" echo -e " ${INFO} Enabling blocking"
local str="Pi-hole Enabled" local str="Pi-hole Enabled"
sed -i "/BLOCKING_ENABLED=/d" "${setupVars}" addOrEditKeyValPair "BLOCKING_ENABLED" "true" "${setupVars}"
echo "BLOCKING_ENABLED=true" >> "${setupVars}"
fi fi
restartDNS reload-lists restartDNS reload-lists
@@ -262,7 +261,7 @@ Options:
elif [[ "${1}" == "off" ]]; then elif [[ "${1}" == "off" ]]; then
# Disable logging # Disable logging
sed -i 's/^log-queries/#log-queries/' /etc/dnsmasq.d/01-pihole.conf sed -i 's/^log-queries/#log-queries/' /etc/dnsmasq.d/01-pihole.conf
sed -i 's/^QUERY_LOGGING=true/QUERY_LOGGING=false/' /etc/pihole/setupVars.conf addOrEditKeyValPair "QUERY_LOGGING" "false" "${setupVars}"
if [[ "${2}" != "noflush" ]]; then if [[ "${2}" != "noflush" ]]; then
# Flush logs # Flush logs
"${PI_HOLE_BIN_DIR}"/pihole -f "${PI_HOLE_BIN_DIR}"/pihole -f
@@ -272,7 +271,7 @@ Options:
elif [[ "${1}" == "on" ]]; then elif [[ "${1}" == "on" ]]; then
# Enable logging # Enable logging
sed -i 's/^#log-queries/log-queries/' /etc/dnsmasq.d/01-pihole.conf sed -i 's/^#log-queries/log-queries/' /etc/dnsmasq.d/01-pihole.conf
sed -i 's/^QUERY_LOGGING=false/QUERY_LOGGING=true/' /etc/pihole/setupVars.conf addOrEditKeyValPair "QUERY_LOGGING" "true" "${setupVars}"
echo -e " ${INFO} Enabling logging..." echo -e " ${INFO} Enabling logging..."
local str="Logging has been enabled!" local str="Logging has been enabled!"
else else
@@ -285,27 +284,29 @@ Options:
} }
analyze_ports() { analyze_ports() {
local lv4 lv6 port=${1}
# FTL is listening at least on at least one port when this # FTL is listening at least on at least one port when this
# function is getting called # function is getting called
echo -e " ${TICK} DNS service is listening"
# Check individual address family/protocol combinations # Check individual address family/protocol combinations
# For a healthy Pi-hole, they should all be up (nothing printed) # For a healthy Pi-hole, they should all be up (nothing printed)
if grep -q "IPv4.*UDP" <<< "${1}"; then lv4="$(ss --ipv4 --listening --numeric --tcp --udp src :${port})"
if grep -q "udp " <<< "${lv4}"; then
echo -e " ${TICK} UDP (IPv4)" echo -e " ${TICK} UDP (IPv4)"
else else
echo -e " ${CROSS} UDP (IPv4)" echo -e " ${CROSS} UDP (IPv4)"
fi fi
if grep -q "IPv4.*TCP" <<< "${1}"; then if grep -q "tcp " <<< "${lv4}"; then
echo -e " ${TICK} TCP (IPv4)" echo -e " ${TICK} TCP (IPv4)"
else else
echo -e " ${CROSS} TCP (IPv4)" echo -e " ${CROSS} TCP (IPv4)"
fi fi
if grep -q "IPv6.*UDP" <<< "${1}"; then lv6="$(ss --ipv6 --listening --numeric --tcp --udp src :${port})"
if grep -q "udp " <<< "${lv6}"; then
echo -e " ${TICK} UDP (IPv6)" echo -e " ${TICK} UDP (IPv6)"
else else
echo -e " ${CROSS} UDP (IPv6)" echo -e " ${CROSS} UDP (IPv6)"
fi fi
if grep -q "IPv6.*TCP" <<< "${1}"; then if grep -q "tcp " <<< "${lv6}"; then
echo -e " ${TICK} TCP (IPv6)" echo -e " ${TICK} TCP (IPv6)"
else else
echo -e " ${CROSS} TCP (IPv6)" echo -e " ${CROSS} TCP (IPv6)"
@@ -314,19 +315,31 @@ analyze_ports() {
} }
statusFunc() { statusFunc() {
# Determine if a pihole service is listening on port 53 # Determine if the pihole-FTL service is listening
local listening local listening pid port
listening="$(lsof -Pni:53)"
if grep -q "pihole" <<< "${listening}"; then pid="$(getFTLPID)"
if [[ "${1}" != "web" ]]; then if [[ "$pid" -eq "-1" ]]; then
analyze_ports "${listening}"
fi
else
case "${1}" in case "${1}" in
"web") echo "-1";; "web") echo "-1";;
*) echo -e " ${CROSS} DNS service is NOT listening";; *) echo -e " ${CROSS} DNS service is NOT running";;
esac esac
return 0 return 0
else
# Get the port pihole-FTL is listening on by using FTL's telnet API
port="$(echo ">dns-port >quit" | nc 127.0.0.1 4711)"
if [[ "${port}" == "0" ]]; then
case "${1}" in
"web") echo "-1";;
*) echo -e " ${CROSS} DNS service is NOT listening";;
esac
return 0
else
if [[ "${1}" != "web" ]]; then
echo -e " ${TICK} FTL is listening on port ${port}"
analyze_ports "${port}"
fi
fi
fi fi
# Determine if Pi-hole's blocking is enabled # Determine if Pi-hole's blocking is enabled
@@ -339,18 +352,19 @@ statusFunc() {
elif grep -q "BLOCKING_ENABLED=true" /etc/pihole/setupVars.conf; then elif grep -q "BLOCKING_ENABLED=true" /etc/pihole/setupVars.conf; then
# Configs are set # Configs are set
case "${1}" in case "${1}" in
"web") echo 1;; "web") echo "$port";;
*) echo -e " ${TICK} Pi-hole blocking is enabled";; *) echo -e " ${TICK} Pi-hole blocking is enabled";;
esac esac
else else
# No configs were found # No configs were found
case "${1}" in case "${1}" in
"web") echo 99;; "web") echo -2;;
*) echo -e " ${INFO} Pi-hole blocking will be enabled";; *) echo -e " ${INFO} Pi-hole blocking will be enabled";;
esac esac
# Enable blocking # Enable blocking
"${PI_HOLE_BIN_DIR}"/pihole enable "${PI_HOLE_BIN_DIR}"/pihole enable
fi fi
} }
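Instead of assuming port 53, statusFunc now asks the running FTL process for its DNS port over the telnet API on 127.0.0.1:4711, exactly as in the hunk above. The same query can be reproduced manually:

```
# Ask pihole-FTL which DNS port it is serving (requires nc and a running FTL)
port="$(echo ">dns-port >quit" | nc 127.0.0.1 4711)"
echo "FTL reports DNS port: ${port}"   # "0" means FTL is up but not listening for DNS
```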
tailFunc() { tailFunc() {


@@ -18,8 +18,8 @@ py.test -vv -n auto -m "build_stage"
py.test -vv -n auto -m "not build_stage" py.test -vv -n auto -m "not build_stage"
``` ```
The build_stage tests have to run first to create the docker images, followed by the actual tests which utilize said images. Unless you're changing your dockerfiles you shouldn't have to run the build_stage every time - but it's a good idea to rebuild at least once a day in case the base Docker images or packages change. The build_stage tests have to run first to create the docker images, followed by the actual tests which utilize said images. Unless you're changing your dockerfiles you shouldn't have to run the build_stage every time - but it's a good idea to rebuild at least once a day in case the base Docker images or packages change.
# How do I debug python? # How do I debug python?
Highly recommended: Setup PyCharm on a **Docker enabled** machine. Having a python debugger like PyCharm changes your life if you've never used it :) Highly recommended: Setup PyCharm on a **Docker enabled** machine. Having a python debugger like PyCharm changes your life if you've never used it :)


@@ -1,4 +1,5 @@
FROM centos:7 FROM centos:7
RUN yum install -y git
ENV GITDIR /etc/.pihole ENV GITDIR /etc/.pihole
ENV SCRIPTDIR /opt/pihole ENV SCRIPTDIR /opt/pihole


@@ -1,4 +1,5 @@
FROM centos:8 FROM quay.io/centos/centos:stream8
RUN yum install -y git
ENV GITDIR /etc/.pihole ENV GITDIR /etc/.pihole
ENV SCRIPTDIR /opt/pihole ENV SCRIPTDIR /opt/pihole


@@ -1,4 +1,5 @@
FROM fedora:33 FROM fedora:33
RUN dnf install -y git
ENV GITDIR /etc/.pihole ENV GITDIR /etc/.pihole
ENV SCRIPTDIR /opt/pihole ENV SCRIPTDIR /opt/pihole


@@ -1,4 +1,5 @@
FROM fedora:34 FROM fedora:34
RUN dnf install -y git
ENV GITDIR /etc/.pihole ENV GITDIR /etc/.pihole
ENV SCRIPTDIR /opt/pihole ENV SCRIPTDIR /opt/pihole


@@ -1,10 +1,9 @@
import pytest import pytest
import testinfra import testinfra
import testinfra.backend.docker
import subprocess
from textwrap import dedent from textwrap import dedent
check_output = testinfra.get_backend(
"local://"
).get_module("Command").check_output
SETUPVARS = { SETUPVARS = {
'PIHOLE_INTERFACE': 'eth99', 'PIHOLE_INTERFACE': 'eth99',
@@ -12,85 +11,42 @@ SETUPVARS = {
'PIHOLE_DNS_2': '4.2.2.2' 'PIHOLE_DNS_2': '4.2.2.2'
} }
IMAGE = 'pytest_pihole:test_container'
tick_box = "[\x1b[1;32m\u2713\x1b[0m]" tick_box = "[\x1b[1;32m\u2713\x1b[0m]"
cross_box = "[\x1b[1;31m\u2717\x1b[0m]" cross_box = "[\x1b[1;31m\u2717\x1b[0m]"
info_box = "[i]" info_box = "[i]"
@pytest.fixture # Monkeypatch sh to bash; if they ever support a non-hard-coded /bin/sh this can go away
def Pihole(Docker): # https://github.com/pytest-dev/pytest-testinfra/blob/master/testinfra/backend/docker.py
''' def run_bash(self, command, *args, **kwargs):
used to contain some script stubbing, now pretty much an alias. cmd = self.get_command(command, *args)
Also provides bash as the default run function shell if self.user is not None:
''' out = self.run_local(
def run_bash(self, command, *args, **kwargs): "docker exec -u %s %s /bin/bash -c %s", self.user, self.name, cmd
cmd = self.get_command(command, *args) )
if self.user is not None: else:
out = self.run_local( out = self.run_local("docker exec %s /bin/bash -c %s", self.name, cmd)
"docker exec -u %s %s /bin/bash -c %s", out.command = self.encode(cmd)
self.user, self.name, cmd) return out
else:
out = self.run_local(
"docker exec %s /bin/bash -c %s", self.name, cmd)
out.command = self.encode(cmd)
return out
funcType = type(Docker.run)
Docker.run = funcType(run_bash, Docker) testinfra.backend.docker.DockerBackend.run = run_bash
return Docker
@pytest.fixture @pytest.fixture
def Docker(request, args, image, cmd): def host():
''' # run a container
combine our fixtures into a docker run command and setup finalizer to docker_id = subprocess.check_output(
cleanup ['docker', 'run', '-t', '-d', '--cap-add=ALL', IMAGE]).decode().strip()
'''
assert 'docker' in check_output('id'), "Are you in the docker group?"
docker_run = "docker run {} {} {}".format(args, image, cmd)
docker_id = check_output(docker_run)
def teardown(): # return a testinfra connection to the container
check_output("docker rm -f %s", docker_id) docker_host = testinfra.get_host("docker://" + docker_id)
request.addfinalizer(teardown)
docker_container = testinfra.get_backend("docker://" + docker_id) yield docker_host
docker_container.id = docker_id # at the end of the test suite, destroy the container
return docker_container subprocess.check_call(['docker', 'rm', '-f', docker_id])
@pytest.fixture
def args(request):
'''
-t became required when tput began being used
'''
return '-t -d'
@pytest.fixture(params=[
'test_container'
])
def tag(request):
'''
consumed by image to make the test matrix
'''
return request.param
@pytest.fixture()
def image(request, tag):
'''
built by test_000_build_containers.py
'''
return 'pytest_pihole:{}'.format(tag)
@pytest.fixture()
def cmd(request):
'''
default to doing nothing by tailing null, but don't exit
'''
return 'tail -f /dev/null'
# Helper functions # Helper functions
@@ -100,7 +56,7 @@ def mock_command(script, args, container):
in unit tests in unit tests
''' '''
full_script_path = '/usr/local/bin/{}'.format(script) full_script_path = '/usr/local/bin/{}'.format(script)
mock_script = dedent('''\ mock_script = dedent(r'''\
#!/bin/bash -e #!/bin/bash -e
echo "\$0 \$@" >> /var/log/{script} echo "\$0 \$@" >> /var/log/{script}
case "\$1" in'''.format(script=script)) case "\$1" in'''.format(script=script))
@@ -121,13 +77,75 @@ def mock_command(script, args, container):
scriptlog=script)) scriptlog=script))
def mock_command_passthrough(script, args, container):
'''
Per other mock_command* functions, allows intercepting of commands we don't want to run for real
in unit tests. However, it also allows only specific arguments to be mocked; anything not defined
will be passed through to the actual command.
Example use-case: mocking `git pull` but still allowing `git clone` to work as intended
'''
orig_script_path = container.check_output('command -v {}'.format(script))
full_script_path = '/usr/local/bin/{}'.format(script)
mock_script = dedent(r'''\
#!/bin/bash -e
echo "\$0 \$@" >> /var/log/{script}
case "\$1" in'''.format(script=script))
for k, v in args.items():
case = dedent('''
{arg})
echo {res}
exit {retcode}
;;'''.format(arg=k, res=v[0], retcode=v[1]))
mock_script += case
mock_script += dedent(r'''
*)
{orig_script_path} "\$@"
;;'''.format(orig_script_path=orig_script_path))
mock_script += dedent('''
esac''')
container.run('''
cat <<EOF> {script}\n{content}\nEOF
chmod +x {script}
rm -f /var/log/{scriptlog}'''.format(script=full_script_path,
content=mock_script,
scriptlog=script))
def mock_command_run(script, args, container):
'''
Allows for setup of commands we don't really want to have to run for real
in unit tests
'''
full_script_path = '/usr/local/bin/{}'.format(script)
mock_script = dedent(r'''\
#!/bin/bash -e
echo "\$0 \$@" >> /var/log/{script}
case "\$1 \$2" in'''.format(script=script))
for k, v in args.items():
case = dedent('''
\"{arg}\")
echo {res}
exit {retcode}
;;'''.format(arg=k, res=v[0], retcode=v[1]))
mock_script += case
mock_script += dedent('''
esac''')
container.run('''
cat <<EOF> {script}\n{content}\nEOF
chmod +x {script}
rm -f /var/log/{scriptlog}'''.format(script=full_script_path,
content=mock_script,
scriptlog=script))
def mock_command_2(script, args, container): def mock_command_2(script, args, container):
''' '''
Allows for setup of commands we don't really want to have to run for real Allows for setup of commands we don't really want to have to run for real
in unit tests in unit tests
''' '''
full_script_path = '/usr/local/bin/{}'.format(script) full_script_path = '/usr/local/bin/{}'.format(script)
mock_script = dedent('''\ mock_script = dedent(r'''\
#!/bin/bash -e #!/bin/bash -e
echo "\$0 \$@" >> /var/log/{script} echo "\$0 \$@" >> /var/log/{script}
case "\$1 \$2" in'''.format(script=script)) case "\$1 \$2" in'''.format(script=script))


@@ -1,6 +1,6 @@
docker-compose==1.23.2 docker-compose
pytest==4.3.0 pytest
pytest-xdist==1.26.1 pytest-xdist
pytest-cov==2.6.1 pytest-cov
testinfra==1.19.0 pytest-testinfra
tox==3.7.0 tox

File diff suppressed because it is too large

16
test/test_any_utils.py Normal file

@@ -0,0 +1,16 @@
def test_key_val_replacement_works(host):
''' Confirms addOrEditKeyValPair provides the expected output '''
host.run('''
setupvars=./testoutput
source /opt/pihole/utils.sh
addOrEditKeyValPair "KEY_ONE" "value1" "./testoutput"
addOrEditKeyValPair "KEY_TWO" "value2" "./testoutput"
addOrEditKeyValPair "KEY_ONE" "value3" "./testoutput"
addOrEditKeyValPair "KEY_FOUR" "value4" "./testoutput"
cat ./testoutput
''')
output = host.run('''
cat ./testoutput
''')
expected_stdout = 'KEY_ONE=value3\nKEY_TWO=value2\nKEY_FOUR=value4\n'
assert expected_stdout == output.stdout
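This test exercises addOrEditKeyValPair from the newly sourced utils.sh, which replaces the old sed-and-append pairs throughout the pihole script. Its implementation is not part of this diff; a minimal sketch of a function with the same observable behaviour (append a missing key, otherwise replace its value in place) might look like:

```
# Minimal sketch only - the real utils.sh implementation may differ
addOrEditKeyValPair() {
    local key="${1}"
    local value="${2}"
    local file="${3}"
    if grep -q "^${key}=" "${file}" 2>/dev/null; then
        # Key already present: replace the whole line, keeping its position
        sed -i "s|^${key}=.*|${key}=${value}|" "${file}"
    else
        # Key missing: append it
        echo "${key}=${value}" >> "${file}"
    fi
}
```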


@@ -1,639 +0,0 @@
from textwrap import dedent
import re
from .conftest import (
SETUPVARS,
tick_box,
info_box,
cross_box,
mock_command,
mock_command_2,
run_script
)
def test_supported_package_manager(Pihole):
'''
confirm installer exits when no supported package manager found
'''
# break supported package managers
Pihole.run('rm -rf /usr/bin/apt-get')
Pihole.run('rm -rf /usr/bin/rpm')
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
''')
expected_stdout = cross_box + ' No supported package manager found'
assert expected_stdout in package_manager_detect.stdout
# assert package_manager_detect.rc == 1
def test_setupVars_are_sourced_to_global_scope(Pihole):
'''
currently update_dialogs sources setupVars with a dot,
then various other functions use the variables.
This confirms the sourced variables are in scope between functions
'''
setup_var_file = 'cat <<EOF> /etc/pihole/setupVars.conf\n'
for k, v in SETUPVARS.items():
setup_var_file += "{}={}\n".format(k, v)
setup_var_file += "EOF\n"
Pihole.run(setup_var_file)
script = dedent('''\
set -e
printSetupVars() {
# Currently debug test function only
echo "Outputting sourced variables"
echo "PIHOLE_INTERFACE=${PIHOLE_INTERFACE}"
echo "PIHOLE_DNS_1=${PIHOLE_DNS_1}"
echo "PIHOLE_DNS_2=${PIHOLE_DNS_2}"
}
update_dialogs() {
. /etc/pihole/setupVars.conf
}
update_dialogs
printSetupVars
''')
output = run_script(Pihole, script).stdout
for k, v in SETUPVARS.items():
assert "{}={}".format(k, v) in output
def test_setupVars_saved_to_file(Pihole):
'''
confirm saved settings are written to a file for future updates to re-use
'''
# dedent works better with this and padding matching script below
set_setup_vars = '\n'
for k, v in SETUPVARS.items():
set_setup_vars += " {}={}\n".format(k, v)
Pihole.run(set_setup_vars).stdout
script = dedent('''\
set -e
echo start
TERM=xterm
source /opt/pihole/basic-install.sh
{}
mkdir -p /etc/dnsmasq.d
version_check_dnsmasq
echo "" > /etc/pihole/pihole-FTL.conf
finalExports
cat /etc/pihole/setupVars.conf
'''.format(set_setup_vars))
output = run_script(Pihole, script).stdout
for k, v in SETUPVARS.items():
assert "{}={}".format(k, v) in output
def test_selinux_not_detected(Pihole):
'''
confirms installer continues when SELinux configuration file does not exist
'''
check_selinux = Pihole.run('''
rm -f /etc/selinux/config
source /opt/pihole/basic-install.sh
checkSelinux
''')
expected_stdout = info_box + ' SELinux not detected'
assert expected_stdout in check_selinux.stdout
assert check_selinux.rc == 0
def test_installPiholeWeb_fresh_install_no_errors(Pihole):
'''
confirms all web page assets from Core repo are installed on a fresh build
'''
installWeb = Pihole.run('''
source /opt/pihole/basic-install.sh
installPiholeWeb
''')
expected_stdout = info_box + ' Installing blocking page...'
assert expected_stdout in installWeb.stdout
expected_stdout = tick_box + (' Creating directory for blocking page, '
'and copying files')
assert expected_stdout in installWeb.stdout
expected_stdout = info_box + ' Backing up index.lighttpd.html'
assert expected_stdout in installWeb.stdout
expected_stdout = ('No default index.lighttpd.html file found... '
'not backing up')
assert expected_stdout in installWeb.stdout
expected_stdout = tick_box + ' Installing sudoer file'
assert expected_stdout in installWeb.stdout
web_directory = Pihole.run('ls -r /var/www/html/pihole').stdout
assert 'index.php' in web_directory
assert 'blockingpage.css' in web_directory
def test_update_package_cache_success_no_errors(Pihole):
'''
confirms package cache was updated without any errors
'''
updateCache = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
update_package_cache
''')
expected_stdout = tick_box + ' Update local cache of available packages'
assert expected_stdout in updateCache.stdout
assert 'error' not in updateCache.stdout.lower()
def test_update_package_cache_failure_no_errors(Pihole):
'''
confirms package cache was not updated
'''
mock_command('apt-get', {'update': ('', '1')}, Pihole)
updateCache = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
update_package_cache
''')
expected_stdout = cross_box + ' Update local cache of available packages'
assert expected_stdout in updateCache.stdout
assert 'Error: Unable to update package cache.' in updateCache.stdout
def test_FTL_detect_aarch64_no_errors(Pihole):
'''
confirms only aarch64 package is downloaded for FTL engine
'''
# mock uname to return aarch64 platform
mock_command('uname', {'-m': ('aarch64', '0')}, Pihole)
# mock ldd to respond with aarch64 shared library
mock_command(
'ldd',
{
'/bin/ls': (
'/lib/ld-linux-aarch64.so.1',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Detected AArch64 (64 Bit ARM) processor'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv4t_no_errors(Pihole):
'''
confirms only armv4t package is downloaded for FTL engine
'''
# mock uname to return armv4t platform
mock_command('uname', {'-m': ('armv4t', '0')}, Pihole)
# mock ldd to respond with ld-linux shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + (' Detected ARMv4 processor')
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv5te_no_errors(Pihole):
'''
confirms only armv5te package is downloaded for FTL engine
'''
# mock uname to return armv5te platform
mock_command('uname', {'-m': ('armv5te', '0')}, Pihole)
# mock ldd to respond with ld-linux shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + (' Detected ARMv5 (or newer) processor')
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv6l_no_errors(Pihole):
'''
confirms only armv6l package is downloaded for FTL engine
'''
# mock uname to return armv6l platform
mock_command('uname', {'-m': ('armv6l', '0')}, Pihole)
# mock ldd to respond with ld-linux-armhf shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux-armhf.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + (' Detected ARMv6 processor '
'(with hard-float support)')
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv7l_no_errors(Pihole):
'''
confirms only armv7l package is downloaded for FTL engine
'''
# mock uname to return armv7l platform
mock_command('uname', {'-m': ('armv7l', '0')}, Pihole)
# mock ldd to respond with ld-linux-armhf shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux-armhf.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + (' Detected ARMv7 processor '
'(with hard-float support)')
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv8a_no_errors(Pihole):
'''
confirms only armv8a package is downloaded for FTL engine
'''
# mock uname to return armv8a platform
mock_command('uname', {'-m': ('armv8a', '0')}, Pihole)
# mock ldd to respond with ld-linux-armhf shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux-armhf.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Detected ARMv8 (or newer) processor'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_x86_64_no_errors(Pihole):
'''
confirms only x86_64 package is downloaded for FTL engine
'''
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Detected x86_64 processor'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_unknown_no_errors(Pihole):
''' confirms only generic package is downloaded for FTL engine '''
# mock uname to return generic platform
mock_command('uname', {'-m': ('mips', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = 'Not able to detect processor (unknown: mips)'
assert expected_stdout in detectPlatform.stdout
def test_FTL_download_aarch64_no_errors(Pihole):
'''
confirms only aarch64 package is downloaded for FTL engine
'''
# mock whiptail answers and ensure installer dependencies
mock_command('whiptail', {'*': ('', '0')}, Pihole)
Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
install_dependent_packages ${INSTALLER_DEPS[@]}
''')
download_binary = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
FTLinstall "pihole-FTL-aarch64-linux-gnu"
''')
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in download_binary.stdout
assert 'error' not in download_binary.stdout.lower()
def test_FTL_binary_installed_and_responsive_no_errors(Pihole):
'''
confirms FTL binary is copied and functional in installed location
'''
installed_binary = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
pihole-FTL version
''')
expected_stdout = 'v'
assert expected_stdout in installed_binary.stdout
# def test_FTL_support_files_installed(Pihole):
# '''
# confirms FTL support files are installed
# '''
# support_files = Pihole.run('''
# source /opt/pihole/basic-install.sh
# FTLdetect
# stat -c '%a %n' /var/log/pihole-FTL.log
# stat -c '%a %n' /run/pihole-FTL.port
# stat -c '%a %n' /run/pihole-FTL.pid
# ls -lac /run
# ''')
# assert '644 /run/pihole-FTL.port' in support_files.stdout
# assert '644 /run/pihole-FTL.pid' in support_files.stdout
# assert '644 /var/log/pihole-FTL.log' in support_files.stdout
def test_IPv6_only_link_local(Pihole):
'''
confirms IPv6 blocking is disabled for Link-local address
'''
# mock ip -6 address to return Link-local address
mock_command_2(
'ip',
{
'-6 address': (
'inet6 fe80::d210:52fa:fe00:7ad7/64 scope link',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = ('Unable to find IPv6 ULA/GUA address')
assert expected_stdout in detectPlatform.stdout
def test_IPv6_only_ULA(Pihole):
'''
confirms IPv6 blocking is enabled for ULA addresses
'''
# mock ip -6 address to return ULA address
mock_command_2(
'ip',
{
'-6 address': (
'inet6 fda2:2001:5555:0:d210:52fa:fe00:7ad7/64 scope global',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = 'Found IPv6 ULA address'
assert expected_stdout in detectPlatform.stdout
def test_IPv6_only_GUA(Pihole):
'''
confirms IPv6 blocking is enabled for GUA addresses
'''
# mock ip -6 address to return GUA address
mock_command_2(
'ip',
{
'-6 address': (
'inet6 2003:12:1e43:301:d210:52fa:fe00:7ad7/64 scope global',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = 'Found IPv6 GUA address'
assert expected_stdout in detectPlatform.stdout
def test_IPv6_GUA_ULA_test(Pihole):
'''
confirms IPv6 blocking is enabled for GUA and ULA addresses
'''
# mock ip -6 address to return GUA and ULA addresses
mock_command_2(
'ip',
{
'-6 address': (
'inet6 2003:12:1e43:301:d210:52fa:fe00:7ad7/64 scope global\n'
'inet6 fda2:2001:5555:0:d210:52fa:fe00:7ad7/64 scope global',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = 'Found IPv6 ULA address'
assert expected_stdout in detectPlatform.stdout
def test_IPv6_ULA_GUA_test(Pihole):
'''
confirms IPv6 blocking is enabled for GUA and ULA addresses
'''
# mock ip -6 address to return ULA and GUA addresses
mock_command_2(
'ip',
{
'-6 address': (
'inet6 fda2:2001:5555:0:d210:52fa:fe00:7ad7/64 scope global\n'
'inet6 2003:12:1e43:301:d210:52fa:fe00:7ad7/64 scope global',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = 'Found IPv6 ULA address'
assert expected_stdout in detectPlatform.stdout
def test_validate_ip(Pihole):
'''
Tests valid_ip for various IP addresses
'''
def test_address(addr, success=True):
output = Pihole.run('''
source /opt/pihole/basic-install.sh
valid_ip "{addr}"
'''.format(addr=addr))
assert output.rc == 0 if success else 1
test_address('192.168.1.1')
test_address('127.0.0.1')
test_address('255.255.255.255')
test_address('255.255.255.256', False)
test_address('255.255.256.255', False)
test_address('255.256.255.255', False)
test_address('256.255.255.255', False)
test_address('1092.168.1.1', False)
test_address('not an IP', False)
test_address('8.8.8.8#', False)
test_address('8.8.8.8#0')
test_address('8.8.8.8#1')
test_address('8.8.8.8#42')
test_address('8.8.8.8#888')
test_address('8.8.8.8#1337')
test_address('8.8.8.8#65535')
test_address('8.8.8.8#65536', False)
test_address('8.8.8.8#-1', False)
test_address('00.0.0.0', False)
test_address('010.0.0.0', False)
test_address('001.0.0.0', False)
test_address('0.0.0.0#00', False)
test_address('0.0.0.0#01', False)
test_address('0.0.0.0#001', False)
test_address('0.0.0.0#0001', False)
test_address('0.0.0.0#00001', False)
def test_os_check_fails(Pihole):
''' Confirms install fails on unsupported OS '''
Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
install_dependent_packages ${OS_CHECK_DEPS[@]}
install_dependent_packages ${INSTALLER_DEPS[@]}
cat <<EOT > /etc/os-release
ID=UnsupportedOS
VERSION_ID="2"
EOT
''')
detectOS = Pihole.run('''
source /opt/pihole/basic-install.sh
os_check
''')
expected_stdout = 'Unsupported OS detected: UnsupportedOS'
assert expected_stdout in detectOS.stdout
def test_os_check_passes(Pihole):
''' Confirms OS meets the requirements '''
Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
install_dependent_packages ${OS_CHECK_DEPS[@]}
install_dependent_packages ${INSTALLER_DEPS[@]}
''')
detectOS = Pihole.run('''
source /opt/pihole/basic-install.sh
os_check
''')
expected_stdout = 'Supported OS detected'
assert expected_stdout in detectOS.stdout
def test_package_manager_has_installer_deps(Pihole):
''' Confirms OS is able to install the required packages for the installer'''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
output = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
install_dependent_packages ${INSTALLER_DEPS[@]}
''')
assert 'No package' not in output.stdout # centos7 still exits 0...
assert output.rc == 0
def test_package_manager_has_pihole_deps(Pihole):
''' Confirms OS is able to install the required packages for Pi-hole '''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
output = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
install_dependent_packages ${PIHOLE_DEPS[@]}
''')
assert 'No package' not in output.stdout # centos7 still exits 0...
assert output.rc == 0
def test_package_manager_has_web_deps(Pihole):
''' Confirms OS is able to install the required packages for web '''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
output = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
install_dependent_packages ${PIHOLE_WEB_DEPS[@]}
''')
assert 'No package' not in output.stdout # centos7 still exits 0...
assert output.rc == 0


@@ -5,11 +5,11 @@ from .conftest import (
) )
def test_php_upgrade_default_optout_centos_eq_7(Pihole): def test_php_upgrade_default_optout_centos_eq_7(host):
''' '''
confirms the default behavior to opt-out of installing PHP7 from REMI confirms the default behavior to opt-out of installing PHP7 from REMI
''' '''
package_manager_detect = Pihole.run(''' package_manager_detect = host.run('''
source /opt/pihole/basic-install.sh source /opt/pihole/basic-install.sh
package_manager_detect package_manager_detect
select_rpm_php select_rpm_php
@@ -17,18 +17,18 @@ def test_php_upgrade_default_optout_centos_eq_7(Pihole):
expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. ' expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. '
'Deprecated PHP may be in use.') 'Deprecated PHP may be in use.')
assert expected_stdout in package_manager_detect.stdout assert expected_stdout in package_manager_detect.stdout
remi_package = Pihole.package('remi-release') remi_package = host.package('remi-release')
assert not remi_package.is_installed assert not remi_package.is_installed
def test_php_upgrade_user_optout_centos_eq_7(Pihole): def test_php_upgrade_user_optout_centos_eq_7(host):
''' '''
confirms installer behavior when user opt-out of installing PHP7 from REMI confirms installer behavior when user opt-out of installing PHP7 from REMI
(php not currently installed) (php not currently installed)
''' '''
# Whiptail dialog returns Cancel for user prompt # Whiptail dialog returns Cancel for user prompt
mock_command('whiptail', {'*': ('', '1')}, Pihole) mock_command('whiptail', {'*': ('', '1')}, host)
package_manager_detect = Pihole.run(''' package_manager_detect = host.run('''
source /opt/pihole/basic-install.sh source /opt/pihole/basic-install.sh
package_manager_detect package_manager_detect
select_rpm_php select_rpm_php
@@ -36,18 +36,18 @@ def test_php_upgrade_user_optout_centos_eq_7(Pihole):
expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. ' expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. '
'Deprecated PHP may be in use.') 'Deprecated PHP may be in use.')
assert expected_stdout in package_manager_detect.stdout assert expected_stdout in package_manager_detect.stdout
remi_package = Pihole.package('remi-release') remi_package = host.package('remi-release')
assert not remi_package.is_installed assert not remi_package.is_installed
def test_php_upgrade_user_optin_centos_eq_7(Pihole): def test_php_upgrade_user_optin_centos_eq_7(host):
''' '''
confirms installer behavior when user opt-in to installing PHP7 from REMI confirms installer behavior when user opt-in to installing PHP7 from REMI
(php not currently installed) (php not currently installed)
''' '''
# Whiptail dialog returns Continue for user prompt # Whiptail dialog returns Continue for user prompt
mock_command('whiptail', {'*': ('', '0')}, Pihole) mock_command('whiptail', {'*': ('', '0')}, host)
package_manager_detect = Pihole.run(''' package_manager_detect = host.run('''
source /opt/pihole/basic-install.sh source /opt/pihole/basic-install.sh
package_manager_detect package_manager_detect
select_rpm_php select_rpm_php
@@ -59,5 +59,5 @@ def test_php_upgrade_user_optin_centos_eq_7(Pihole):
expected_stdout = tick_box + (' Remi\'s RPM repository has ' expected_stdout = tick_box + (' Remi\'s RPM repository has '
'been enabled for PHP7') 'been enabled for PHP7')
assert expected_stdout in package_manager_detect.stdout assert expected_stdout in package_manager_detect.stdout
remi_package = Pihole.package('remi-release') remi_package = host.package('remi-release')
assert remi_package.is_installed assert remi_package.is_installed


@@ -5,12 +5,12 @@ from .conftest import (
) )
def test_php_upgrade_default_continue_centos_gte_8(Pihole): def test_php_upgrade_default_continue_centos_gte_8(host):
''' '''
confirms the latest version of CentOS continues / does not optout confirms the latest version of CentOS continues / does not optout
(should trigger on CentOS7 only) (should trigger on CentOS7 only)
''' '''
package_manager_detect = Pihole.run(''' package_manager_detect = host.run('''
source /opt/pihole/basic-install.sh source /opt/pihole/basic-install.sh
package_manager_detect package_manager_detect
select_rpm_php select_rpm_php
@@ -19,19 +19,19 @@ def test_php_upgrade_default_continue_centos_gte_8(Pihole):
' Deprecated PHP may be in use.') ' Deprecated PHP may be in use.')
assert unexpected_stdout not in package_manager_detect.stdout assert unexpected_stdout not in package_manager_detect.stdout
# ensure remi was not installed on latest CentOS # ensure remi was not installed on latest CentOS
remi_package = Pihole.package('remi-release') remi_package = host.package('remi-release')
assert not remi_package.is_installed assert not remi_package.is_installed
def test_php_upgrade_user_optout_skipped_centos_gte_8(Pihole): def test_php_upgrade_user_optout_skipped_centos_gte_8(host):
''' '''
confirms installer skips user opt-out of installing PHP7 from REMI on confirms installer skips user opt-out of installing PHP7 from REMI on
latest CentOS (should trigger on CentOS7 only) latest CentOS (should trigger on CentOS7 only)
(php not currently installed) (php not currently installed)
''' '''
# Whiptail dialog returns Cancel for user prompt # Whiptail dialog returns Cancel for user prompt
mock_command('whiptail', {'*': ('', '1')}, Pihole) mock_command('whiptail', {'*': ('', '1')}, host)
package_manager_detect = Pihole.run(''' package_manager_detect = host.run('''
source /opt/pihole/basic-install.sh source /opt/pihole/basic-install.sh
package_manager_detect package_manager_detect
select_rpm_php select_rpm_php
@@ -40,19 +40,19 @@ def test_php_upgrade_user_optout_skipped_centos_gte_8(Pihole):
' Deprecated PHP may be in use.') ' Deprecated PHP may be in use.')
assert unexpected_stdout not in package_manager_detect.stdout assert unexpected_stdout not in package_manager_detect.stdout
# ensure remi was not installed on latest CentOS # ensure remi was not installed on latest CentOS
remi_package = Pihole.package('remi-release') remi_package = host.package('remi-release')
assert not remi_package.is_installed assert not remi_package.is_installed
def test_php_upgrade_user_optin_skipped_centos_gte_8(Pihole): def test_php_upgrade_user_optin_skipped_centos_gte_8(host):
''' '''
confirms installer skips user opt-in to installing PHP7 from REMI on confirms installer skips user opt-in to installing PHP7 from REMI on
latest CentOS (should trigger on CentOS7 only) latest CentOS (should trigger on CentOS7 only)
(php not currently installed) (php not currently installed)
''' '''
# Whiptail dialog returns Continue for user prompt # Whiptail dialog returns Continue for user prompt
mock_command('whiptail', {'*': ('', '0')}, Pihole) mock_command('whiptail', {'*': ('', '0')}, host)
package_manager_detect = Pihole.run(''' package_manager_detect = host.run('''
source /opt/pihole/basic-install.sh source /opt/pihole/basic-install.sh
package_manager_detect package_manager_detect
select_rpm_php select_rpm_php
@@ -64,5 +64,5 @@ def test_php_upgrade_user_optin_skipped_centos_gte_8(Pihole):
unexpected_stdout = tick_box + (' Remi\'s RPM repository has ' unexpected_stdout = tick_box + (' Remi\'s RPM repository has '
'been enabled for PHP7') 'been enabled for PHP7')
assert unexpected_stdout not in package_manager_detect.stdout assert unexpected_stdout not in package_manager_detect.stdout
remi_package = Pihole.package('remi-release') remi_package = host.package('remi-release')
assert not remi_package.is_installed assert not remi_package.is_installed


@@ -7,13 +7,13 @@ from .conftest import (
 )
-def test_release_supported_version_check_centos(Pihole):
+def test_release_supported_version_check_centos(host):
     '''
     confirms installer exits on unsupported releases of CentOS
     '''
     # modify /etc/redhat-release to mock an unsupported CentOS release
-    Pihole.run('echo "CentOS Linux release 6.9" > /etc/redhat-release')
-    package_manager_detect = Pihole.run('''
+    host.run('echo "CentOS Linux release 6.9" > /etc/redhat-release')
+    package_manager_detect = host.run('''
     source /opt/pihole/basic-install.sh
     package_manager_detect
     select_rpm_php
@@ -24,11 +24,11 @@ def test_release_supported_version_check_centos(Pihole):
     assert expected_stdout in package_manager_detect.stdout
-def test_enable_epel_repository_centos(Pihole):
+def test_enable_epel_repository_centos(host):
     '''
     confirms the EPEL package repository is enabled when installed on CentOS
     '''
-    package_manager_detect = Pihole.run('''
+    package_manager_detect = host.run('''
     source /opt/pihole/basic-install.sh
     package_manager_detect
     select_rpm_php
@@ -38,22 +38,22 @@ def test_enable_epel_repository_centos(Pihole):
     assert expected_stdout in package_manager_detect.stdout
     expected_stdout = tick_box + ' Installed epel-release'
     assert expected_stdout in package_manager_detect.stdout
-    epel_package = Pihole.package('epel-release')
+    epel_package = host.package('epel-release')
     assert epel_package.is_installed
-def test_php_version_lt_7_detected_upgrade_default_optout_centos(Pihole):
+def test_php_version_lt_7_detected_upgrade_default_optout_centos(host):
     '''
     confirms the default behavior to opt-out of upgrading to PHP7 from REMI
     '''
     # first we will install the default php version to test installer behavior
-    php_install = Pihole.run('yum install -y php')
+    php_install = host.run('yum install -y php')
     assert php_install.rc == 0
-    php_package = Pihole.package('php')
+    php_package = host.package('php')
     default_centos_php_version = php_package.version.split('.')[0]
     if int(default_centos_php_version) >= 7:  # PHP7 is supported/recommended
         pytest.skip("Test deprecated . Detected default PHP version >= 7")
-    package_manager_detect = Pihole.run('''
+    package_manager_detect = host.run('''
     source /opt/pihole/basic-install.sh
     package_manager_detect
     select_rpm_php
@@ -61,24 +61,24 @@ def test_php_version_lt_7_detected_upgrade_default_optout_centos(Pihole):
     expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. '
                                   'Deprecated PHP may be in use.')
     assert expected_stdout in package_manager_detect.stdout
-    remi_package = Pihole.package('remi-release')
+    remi_package = host.package('remi-release')
     assert not remi_package.is_installed
-def test_php_version_lt_7_detected_upgrade_user_optout_centos(Pihole):
+def test_php_version_lt_7_detected_upgrade_user_optout_centos(host):
     '''
    confirms installer behavior when user opt-out to upgrade to PHP7 via REMI
     '''
     # first we will install the default php version to test installer behavior
-    php_install = Pihole.run('yum install -y php')
+    php_install = host.run('yum install -y php')
     assert php_install.rc == 0
-    php_package = Pihole.package('php')
+    php_package = host.package('php')
     default_centos_php_version = php_package.version.split('.')[0]
     if int(default_centos_php_version) >= 7:  # PHP7 is supported/recommended
         pytest.skip("Test deprecated . Detected default PHP version >= 7")
     # Whiptail dialog returns Cancel for user prompt
-    mock_command('whiptail', {'*': ('', '1')}, Pihole)
-    package_manager_detect = Pihole.run('''
+    mock_command('whiptail', {'*': ('', '1')}, host)
+    package_manager_detect = host.run('''
     source /opt/pihole/basic-install.sh
     package_manager_detect
     select_rpm_php
@@ -86,24 +86,24 @@ def test_php_version_lt_7_detected_upgrade_user_optout_centos(Pihole):
     expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. '
                                   'Deprecated PHP may be in use.')
     assert expected_stdout in package_manager_detect.stdout
-    remi_package = Pihole.package('remi-release')
+    remi_package = host.package('remi-release')
     assert not remi_package.is_installed
-def test_php_version_lt_7_detected_upgrade_user_optin_centos(Pihole):
+def test_php_version_lt_7_detected_upgrade_user_optin_centos(host):
     '''
     confirms installer behavior when user opt-in to upgrade to PHP7 via REMI
     '''
     # first we will install the default php version to test installer behavior
-    php_install = Pihole.run('yum install -y php')
+    php_install = host.run('yum install -y php')
     assert php_install.rc == 0
-    php_package = Pihole.package('php')
+    php_package = host.package('php')
     default_centos_php_version = php_package.version.split('.')[0]
     if int(default_centos_php_version) >= 7:  # PHP7 is supported/recommended
         pytest.skip("Test deprecated . Detected default PHP version >= 7")
     # Whiptail dialog returns Continue for user prompt
-    mock_command('whiptail', {'*': ('', '0')}, Pihole)
-    package_manager_detect = Pihole.run('''
+    mock_command('whiptail', {'*': ('', '0')}, host)
+    package_manager_detect = host.run('''
     source /opt/pihole/basic-install.sh
     package_manager_detect
     select_rpm_php
@@ -118,8 +118,8 @@ def test_php_version_lt_7_detected_upgrade_user_optin_centos(Pihole):
     expected_stdout = tick_box + (' Remi\'s RPM repository has '
                                   'been enabled for PHP7')
     assert expected_stdout in package_manager_detect.stdout
-    remi_package = Pihole.package('remi-release')
+    remi_package = host.package('remi-release')
     assert remi_package.is_installed
-    updated_php_package = Pihole.package('php')
+    updated_php_package = host.package('php')
     updated_php_version = updated_php_package.version.split('.')[0]
     assert int(updated_php_version) == 7


@@ -5,7 +5,7 @@ from .conftest import (
 )
-def mock_selinux_config(state, Pihole):
+def mock_selinux_config(state, host):
     '''
     Creates a mock SELinux config file with expected content
     '''
@@ -13,20 +13,20 @@ def mock_selinux_config(state, Pihole):
     valid_states = ['enforcing', 'permissive', 'disabled']
     assert state in valid_states
     # getenforce returns the running state of SELinux
-    mock_command('getenforce', {'*': (state.capitalize(), '0')}, Pihole)
+    mock_command('getenforce', {'*': (state.capitalize(), '0')}, host)
     # create mock configuration with desired content
-    Pihole.run('''
+    host.run('''
     mkdir /etc/selinux
     echo "SELINUX={state}" > /etc/selinux/config
     '''.format(state=state.lower()))
-def test_selinux_enforcing_exit(Pihole):
+def test_selinux_enforcing_exit(host):
     '''
     confirms installer prompts to exit when SELinux is Enforcing by default
     '''
-    mock_selinux_config("enforcing", Pihole)
-    check_selinux = Pihole.run('''
+    mock_selinux_config("enforcing", host)
+    check_selinux = host.run('''
     source /opt/pihole/basic-install.sh
     checkSelinux
     ''')
@@ -37,12 +37,12 @@ def test_selinux_enforcing_exit(Pihole):
     assert check_selinux.rc == 1
-def test_selinux_permissive(Pihole):
+def test_selinux_permissive(host):
     '''
     confirms installer continues when SELinux is Permissive
     '''
-    mock_selinux_config("permissive", Pihole)
-    check_selinux = Pihole.run('''
+    mock_selinux_config("permissive", host)
+    check_selinux = host.run('''
     source /opt/pihole/basic-install.sh
     checkSelinux
     ''')
@@ -51,12 +51,12 @@ def test_selinux_permissive(Pihole):
     assert check_selinux.rc == 0
-def test_selinux_disabled(Pihole):
+def test_selinux_disabled(host):
     '''
     confirms installer continues when SELinux is Disabled
     '''
-    mock_selinux_config("disabled", Pihole)
-    check_selinux = Pihole.run('''
+    mock_selinux_config("disabled", host)
+    check_selinux = host.run('''
     source /opt/pihole/basic-install.sh
     checkSelinux
     ''')
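
The SELinux tests above (and the whiptail prompts earlier) lean on a mock_command helper imported from .conftest. Broadly, it plants a stub executable on the test target that returns a canned output string and exit status; the sketch below is an illustrative approximation of that idea, not the project's actual conftest code, and it only handles the catch-all '*' key used in these hunks:

import shlex


def mock_command_sketch(name, outcomes, host):
    # Illustrative approximation of mock_command('whiptail', {'*': ('', '1')}, host):
    # write /usr/local/bin/<name> so the installer sees a canned output/exit code.
    output, exit_code = outcomes['*']
    stub = '#!/bin/bash\nprintf %s {}\nexit {}\n'.format(shlex.quote(output), exit_code)
    host.run('echo {} > /usr/local/bin/{}'.format(shlex.quote(stub), name))
    host.run('chmod +x /usr/local/bin/{}'.format(name))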


@@ -1,16 +1,16 @@
-def test_epel_and_remi_not_installed_fedora(Pihole):
+def test_epel_and_remi_not_installed_fedora(host):
     '''
     confirms installer does not attempt to install EPEL/REMI repositories
     on Fedora
     '''
-    package_manager_detect = Pihole.run('''
+    package_manager_detect = host.run('''
     source /opt/pihole/basic-install.sh
     package_manager_detect
     select_rpm_php
     ''')
     assert package_manager_detect.stdout == ''
-    epel_package = Pihole.package('epel-release')
+    epel_package = host.package('epel-release')
    assert not epel_package.is_installed
-    remi_package = Pihole.package('remi-release')
+    remi_package = host.package('remi-release')
     assert not remi_package.is_installed


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _centos_7.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_7_support.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_7_support.py
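
The remaining hunks are the per-distribution tox environments, and each carries the same two changes: the interpreter pin moves from py37 to py38, and the pytest invocation picks up the renamed/split test modules (test_any_automated_install.py plus the new test_any_utils.py). Reassembled from the hunk above, the updated CentOS 7 environment would read roughly as follows (blank-line placement in the repository file may differ):

[tox]
envlist = py38

[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _centos_7.Dockerfile -t pytest_pihole:test_container ../
    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_7_support.py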


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _centos_8.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_8_support.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_8_support.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _debian_10.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _debian_11.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _debian_9.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _fedora_33.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py ./test_centos_fedora_common_support.py ./test_fedora_support.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_fedora_support.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _fedora_34.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py ./test_centos_fedora_common_support.py ./test_fedora_support.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_fedora_support.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _ubuntu_16.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _ubuntu_18.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _ubuntu_20.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py


@@ -1,8 +1,8 @@
 [tox]
-envlist = py37
+envlist = py38
 [testenv]
 whitelist_externals = docker
 deps = -rrequirements.txt
 commands = docker build -f _ubuntu_21.Dockerfile -t pytest_pihole:test_container ../
-    pytest {posargs:-vv -n auto} ./test_automated_install.py
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py