Compare commits


370 Commits

Author SHA1 Message Date
Adam Warner
4d25f69526 Merge pull request #3321 from pi-hole/release/v5.0
Pi-hole core v5.0
2020-05-10 19:07:53 +01:00
DL6ER
e728d7f761 Merge pull request #3318 from pi-hole/tweak/default_group
Rename default group to ... well ... "Default"
2020-05-07 19:24:04 +02:00
DL6ER
7cc35d3b04 Add update to gravity database version 12, renaming the Unassociated group to Default group.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-05-07 18:01:37 +02:00
DL6ER
78469ee58d Merge pull request #3255 from pi-hole/tweak/emailregex
Enhanced email validation regex
2020-05-06 09:48:26 +02:00
Adam Warner
369288cc48 Update advanced/Scripts/webpage.sh
Co-authored-by: DL6ER <DL6ER@users.noreply.github.com>
2020-05-06 08:40:54 +01:00
Dan Schaper
df13b9c32a Merge pull request #3283 from pi-hole/tweak/remove_firewall_config
Remove configureFirewall function, the call to it, and related tests
2020-05-02 10:06:31 -07:00
Dan Schaper
017d405b28 Merge pull request #3307 from pi-hole/tweak/debugger_type_display
Improve debugger database table printing
2020-04-29 11:48:47 -07:00
DL6ER
ddb354f78b Add indentation of the enabled field for the domainlist
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-29 06:23:29 +02:00
Dan Schaper
393c7730ec Merge pull request #3299 from pi-hole/tweak/allow_()_in_urls
Allow ( and ) in adlist URLs.
2020-04-28 11:38:08 -07:00
DL6ER
4f0e47e927 Merge pull request #3296 from pi-hole/fix/remove_hostrecord
Remove pihole -a hostrecord
2020-04-28 20:06:49 +02:00
DL6ER
288d487fc0 Allow ( and ) in adlist URLs.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-26 09:33:09 +02:00
DL6ER
20ef5e0264 Show associated group IDs in domains/clients/adlists listing. We get the data through a LEFT JOIN followed by GROUPing by the left list ID, finalized through a GROUP_CONCATenation.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-24 10:33:46 +02:00
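As a rough illustration of the query shape this commit describes (the table and column names below are assumptions based on the gravity database layout, not the exact code):

```bash
# Illustrative sketch: list every domain together with a comma-separated list of the
# group IDs it is associated with. Table/column names are assumptions for this example.
sqlite3 /etc/pihole/gravity.db "
  SELECT d.id, d.domain, GROUP_CONCAT(dbg.group_id) AS group_ids
  FROM domainlist d
  LEFT JOIN domainlist_by_group dbg ON dbg.domainlist_id = d.id
  GROUP BY d.id;"
```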
DL6ER
ad5802715e enabled field: Center 0, right-align 1
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-23 15:16:48 +02:00
DL6ER
989bbad37e Remove pihole -a hostrecord
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-23 13:20:15 +02:00
DL6ER
63f6c6a894 Add indentation for enabled and type fields
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-23 10:01:22 +02:00
Adam Warner
d42785a3bf Merge pull request #3271 from pi-hole/tweak/version
Add branch name to pihole -v
2020-04-21 16:07:27 +01:00
DL6ER
401c029dc4 Improve else condition of branch determination
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-21 16:08:32 +02:00
DL6ER
ed9d74593d Merge pull request #3289 from pi-hole/tweak/boldify_uniques
Gravity: Boldify number of unique domains
2020-04-21 10:18:23 +02:00
Adam Warner
9286965ee2 Merge pull request #3287 from pi-hole/tweak/remove-deprecated-list
Remove Deprecated cameleon list
2020-04-21 08:32:24 +01:00
DL6ER
fa57c457f3 Boldify number of unique domains as this is the actually interesting number
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-21 09:10:21 +02:00
DL6ER
0343171703 Add correct displaying for detached HEAD state.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-21 08:54:28 +02:00
DL6ER
176fbaf83b Ask pihole-FTL for the branch it was compiled from instead of trusting the checkout file to be present.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-21 08:51:17 +02:00
Adam Warner
94a4f844a8 Remove deprecated list
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-04-20 21:31:20 +01:00
Adam Warner
a37dba2c81 remove configureFirewall function, the call to it, and related tests
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-04-19 14:52:01 +01:00
Adam Warner
471006676c Merge pull request #3227 from pi-hole/new/CLI_domain_comments
Add option --comment "whatever" for adding comments for new domains through the CLI interface.
2020-04-19 14:39:05 +01:00
Adam Warner
0155d42650 Merge pull request #3252 from yubiuser/patch-1
add [options] for 'pihole restart' to manpage and cli help output
2020-04-19 14:35:24 +01:00
Adam Warner
3cc9ba4ee8 stickler
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-04-18 12:57:06 +01:00
Adam Warner
6dc85c3527 Don't display branch name if it is on master.
Prefer cached remote version over github API

Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-04-18 12:51:04 +01:00
Dan Schaper
4f01daf5bc Merge pull request #3244 from atenart/remove-hosts-file-ads-list
basic_install: remove remaining references to hosts-file.net
2020-04-16 13:33:34 -07:00
Dan Schaper
0f20470a38 Merge pull request #3269 from pi-hole/tweak/hosts-comments
Add support for comments in HOSTS-like files
2020-04-15 11:11:10 -07:00
Adam Warner
851947bbf2 Add branch name to version output
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-04-13 20:58:46 +01:00
Dan Schaper
413fa94e98 Merge pull request #3263 from mschoettle/fix/broken-blocking-landing-page-v5.0
Fixes broken blocking page and landing page when changing server port or host name (v5.0)
2020-04-10 20:50:26 -07:00
Matthias Schoettle
308eb5eda5 Fixes broken blocking page and landing page when changing server port and/or hostname.
See issues #2195 and #2720.

Signed-off-by: Matthias Schoettle <git@mattsch.com>
2020-04-10 12:29:01 -04:00
Adam Warner
26f71e4dbe accidentally a space
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-04-05 12:34:14 +01:00
Adam Warner
b6ac1585ec add regex attribution
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-04-05 12:29:45 +01:00
Adam Warner
a9b19df4ec expand email validation regex to catch more valid emails see comments on PR #3254
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-04-05 12:28:33 +01:00
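For orientation, bash validates such addresses with an ERE inside `[[ ... =~ ... ]]`; the pattern below is a simplified stand-in for demonstration, not the regex that landed in webpage.sh:

```bash
# Simplified stand-in pattern; the project's actual email regex is broader.
email="user.name+tag@example.co.uk"
if [[ "${email}" =~ ^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$ ]]; then
    echo "accepted: ${email}"
else
    echo "rejected: ${email}"
fi
```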
yubiuser
d27a565d39 Apply suggestions from code review
Co-Authored-By: DL6ER <DL6ER@users.noreply.github.com>
Signed-off-by: Christian König <ckoenig@posteo.de>
2020-04-05 11:46:41 +02:00
M4x
2de5362adc Sanitize email address in case of security issues (#3254)
* Sanitize email address in case of security issues

Signed-off-by: bash-c <aboultraman@gmail.com>
2020-04-05 10:20:35 +01:00
Christian König
de42669bb7 fix typo in pihole help
Signed-off-by: Christian König <ckoenig@posteo.de>
2020-04-05 08:56:10 +02:00
Christian König
3095fd4dd6 add restart [options] to cli help
Signed-off-by: Christian König <ckoenig@posteo.de>
2020-04-05 08:49:35 +02:00
yubiuser
ebbb7168a4 add [options] for pihole restartdns
Signed-off-by: Christian König <ckoenig@posteo.de>
2020-04-04 22:47:14 +02:00
Antoine Tenart
16f664cdb4 basic_install: remove remaining references to hosts-file.net
Commit dc35709a1b ("Remove hosts-file.net from default lists") left a
few references to hosts-file.net. Removes them.

Signed-off-by: Antoine Tenart <antoine.tenart@ack.tf>
2020-04-02 21:23:55 +02:00
DL6ER
a2d2639ee8 Merge pull request #3242 from pi-hole/fix/do_not_flush_neigh_cache
Do not flush neigh cache
2020-04-01 20:50:28 +02:00
DL6ER
d1caad76d8 Do not flush neigh cache as this is known to create a number of issues. The better approach is to manually flush the ARP cache by either restarting or calling "ip neigh flush all".
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-04-01 17:19:32 +00:00
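The manual workaround the commit message points to, should a stale ARP/neighbour cache ever need clearing by hand:

```bash
# Run as root; flushes the kernel's neighbour (ARP/NDP) cache on demand.
sudo ip neigh flush all
```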
DL6ER
fff7adfb20 Merge pull request #3236 from pi-hole/PromoFaux-patch-1
Remove hosts-file.net from default lists
2020-03-31 23:23:19 +02:00
Adam Warner
7d19ee1b25 validate blocklist URL before adding to the database (#3237)
Signed-off-by: Adam Warner <me@adamwarner.co.uk>

Co-authored-by: DL6ER <dl6er@dl6er.de>
2020-03-31 21:48:10 +01:00
DL6ER
7b15a88dc4 Strip comments from downloaded lists instead of discarding lines with comments altogether
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-03-31 18:36:40 +00:00
Adam Warner
dc35709a1b Remove hosts-file.net from default lists
2020-03-31 17:39:21 +01:00
DL6ER
0fad979206 Merge pull request #3230 from pi-hole/fix/remove-19036
Remove 19036 trust anchor
2020-03-27 19:57:41 +01:00
DL6ER
277179f150 Remove 19036 trust anchor, now expired: https://www.icann.org/resources/pages/ksk-rollover
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-03-27 19:34:41 +01:00
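A hedged sketch of the clean-up this implies: dropping the expired KSK-2010 `trust-anchor` line (key tag 19036) from the dnsmasq configuration. The file path and exact line format are assumptions for illustration.

```bash
# Remove any trust-anchor line for the expired key tag 19036; path is an assumption.
sudo sed -i '/trust-anchor=\.,19036,/d' /etc/dnsmasq.d/01-pihole.conf
sudo pihole restartdns
```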
DL6ER
15a9d662ac Add option --comment "whatever" for adding comments for new domains through the CLI interface.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-03-27 08:45:04 +01:00
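Example invocation of the new flag (the domain and comment text here are made up):

```bash
# Add a domain to the blacklist with a comment attached, per the new --comment option.
pihole -b ads.example.com --comment "blocked while testing the new CLI option"
```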
Adam Warner
1b35eebad8 Merge pull request #3207 from pi-hole/tweak/resolvconf
Remove resolvconf dependency
2020-03-24 13:11:22 +00:00
Adam Warner
4994da5170 Update automated install/basic-install.sh
2020-03-12 18:48:40 +00:00
Adam Warner
175d32c5f6 Set nameservers to those chosen by the user in the whiptail dialog
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-03-11 18:55:43 +00:00
Adam Warner
1481cc583f Don't set nameserver in dhcpcd.conf
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-03-11 18:48:40 +00:00
Adam Warner
dbc54b3063 remove resolvconf dep
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-03-11 18:47:59 +00:00
DL6ER
22ce5c0d70 Fix incorrect type description. (#3201)
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-03-08 16:32:37 -07:00
Dan Schaper
f617ed2f44 Merge pull request #3186 from pi-hole/fix/awkInQuery
Malformed wildcard blocking doesn't crash awk.
2020-03-02 12:39:26 -08:00
Dan Schaper
bf4fada3b7 Don't quote inside backticks, use unquoted variable.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2020-03-02 09:52:06 -08:00
Dan Schaper
360d0e4e6b Loop through array of lists.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2020-03-02 08:07:10 -08:00
Dan Schaper
4f390ce801 Use bash regex instead of awk.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2020-03-02 05:39:21 -08:00
Adam Warner
dc8ae4f0ab Merge pull request #3127 from pi-hole/fix/removeFunding
Delete FUNDING.yml
2020-02-28 22:47:20 +00:00
Adam Warner
d2a8b4d2b9 Merge pull request #3180 from pi-hole/release/v4.4
Tidying up
2020-02-28 22:34:41 +00:00
Adam Warner
c07d86b9f9 Merge branch 'release/v5.0' into release/v4.4
2020-02-28 22:24:11 +00:00
Adam Warner
9e490775ff Merge pull request #3166 from pi-hole/release/v4.4
Add use-application-dns.net = NXDOMAIN in ProcessDNSSettings rather t…
2020-02-25 20:59:35 +00:00
Adam Warner
58785020bd Merge pull request #3161 from pi-hole/cherry-pick-4.3.5
cherry pick 4.3.5 into 5.0
2020-02-25 20:42:39 +00:00
DL6ER
1c74b41869 Add use-application-dns.net = NXDOMAIN in ProcessDNSSettings rather than in the template so we can ensure that it will survive config-renewals.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-25 20:41:35 +00:00
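What this amounts to, roughly: ProcessDNSSettings writes the Firefox canary domain into the generated dnsmasq config so it always answers NXDOMAIN and survives config renewals. A sketch, under the assumption that the generated file is 01-pihole.conf:

```bash
# Sketch only: a "server=/domain/" line with no upstream makes dnsmasq answer the
# canary domain locally (NXDOMAIN), which tells Firefox not to enable DNS-over-HTTPS.
echo "server=/use-application-dns.net/" | sudo tee -a /etc/dnsmasq.d/01-pihole.conf
sudo pihole restartdns
```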
Adam Warner
6104d81622 Safeguard against colour output in grep command; add -i to grep to make the search for "Location" case-insensitive
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-02-24 20:36:45 +00:00
Adam Warner
121c93e822 Merge pull request #3160 from pi-hole/release/v4.3.5
Hotfix 4.3.5
2020-02-24 20:16:54 +00:00
Adam Warner
b4c2bf678f Safeguard against colour output in grep command; add -i to grep to make the search for "Location" case-insensitive
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-02-24 20:02:48 +00:00
Adam Warner
14944b0283 Merge pull request #3157 from pi-hole/release/v4.3.4
Release v4.3.4
2020-02-24 18:48:03 +00:00
Adam Warner
8ecaaba247 Compare daemons to expected results. (#3158) (#3159)
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>

Co-authored-by: Dan Schaper <dan@glacialmagma.com>
2020-02-24 18:00:19 +00:00
Dan Schaper
0fbcc6d8b5 Compare daemons to expected results. (#3158)
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2020-02-24 17:38:37 +00:00
Adam Warner
707e21b927 :dominik: Detect binary name before calling FTLcheckUpdate in update.sh
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-02-24 12:10:36 +00:00
DL6ER
f4a1cc6dec Merge pull request #3150 from pi-hole/tweak/database_warnings_inspection
Gravity: Check suitability of sourced lists
2020-02-24 10:18:00 +01:00
DL6ER
3dd05606ca Call it the received number of domains instead of the imported number, as importing only happens a bit later. Only show the number of invalid domains if there are any.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-24 07:06:15 +01:00
DL6ER
1e8bfd33f5 Improve output
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-23 22:50:06 +01:00
Adam Warner
545b6605bc 4.3.3 (#3154)
* Backport ee7090b8fc to v4 to prevent failures in FTL download
* update tests to reflect changes to FTL download URL
* backport `tbd` fix

Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-02-23 21:34:12 +00:00
DL6ER
8131b5961c Add comments to the code describing the changes.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-22 15:22:29 +01:00
DL6ER
81d4531e10 Implement performant list checking routine.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-22 13:01:55 +01:00
DL6ER
050e2963c7 Remove redundant code.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-21 22:28:53 +01:00
DL6ER
3c09cd4a3a Experimental output of matching line from shown warnings.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-21 21:47:56 +01:00
DL6ER
839fe32042 Fix issue with missing newline at the end of adlists (#3144)
* Also display non-fatal warnings during the database importing. Previously, we only showed warnings when there were also errors (errors are always fatal).

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Ensure there is always a newline on the last line.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Stickler linting

Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>

* Move sed command into subroutine to avoid code duplication.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Also unify comments.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Also unify comments.

Signed-off-by: DL6ER <dl6er@dl6er.de>

Co-authored-by: Dan Schaper <dan@glacialmagma.com>
2020-02-21 18:56:48 +00:00
Adam Warner
85c15a7167 Merge pull request #3147 from pi-hole/tweak/forcelocalversions
force an `updatecheck.sh` run if any of the three components are updated
2020-02-20 18:57:56 +00:00
DL6ER
b73580fa93 Merge pull request #3132 from pi-hole/fix/pihole-tail
Fix pihole -t sed instructions
2020-02-19 19:07:38 +01:00
Adam Warner
4a5f344b09 then
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-02-19 17:46:45 +00:00
Adam Warner
af95e8c250 force an updatecheck.sh run if any of the three components are updated
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-02-19 17:41:53 +00:00
Adam Warner
ee7090b8fc Merge pull request #3140 from pi-hole/tweak/whocaresaboutthelatesttaganyway
No need to determine the latest tag, we can just go direct.
2020-02-17 21:39:48 +00:00
Adam Warner
7be019ff52 No need to determine the latest tag, we can just go direct
Co-authored-by: Dan Schaper <dan.schaper@pi-hole.net>
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-02-17 21:29:25 +00:00
DL6ER
d14ee26d6a Merge pull request #3139 from pi-hole/fix/count_before_calling_FTL
Fix wrong number of blocking domains shown on the dashboard
2020-02-17 21:32:45 +01:00
DL6ER
52398052e9 Compute number of domains (and store it in the database) BEFORE calling FTL to re-read said value.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-17 21:07:48 +01:00
DL6ER
ddbd57f459 Merge pull request #3131 from pi-hole/tweak/debugger_performance
Tweaks and fixes for the debugger
2020-02-17 06:23:20 +01:00
DL6ER
601f9048cd Merge pull request #3130 from pi-hole/fix/gravity_updated_timestamp
Store gravity update timestamp only after database swapping
2020-02-17 06:07:05 +01:00
Dan Schaper
c5c414a7a2 Stickler Lint - quote to prevent splitting
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2020-02-16 19:24:05 -08:00
Dan Schaper
bc91be6c08 Merge branch 'tweak/debugger_performance' of https://github.com/pi-hole/pi-hole into tweak/debugger_performance
2020-02-16 17:44:16 -08:00
DL6ER
d0e29ab7b0 Add human-readable output of time of the last gravity run.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-16 17:43:54 -08:00
DL6ER
714a79ffce Migrate debugger to domainlist and add printing of client table.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-16 17:43:54 -08:00
DL6ER
cd3ad0bdc7 Show info table instead of counting domains to speed up the debugging process on low-end hardware drastically.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-16 17:43:54 -08:00
DL6ER
a8db753493 Merge pull request #3138 from pi-hole/fix/php-intl
Install php-intl meta package.
2020-02-16 21:51:14 +01:00
DL6ER
75633f0950 Install php-intl and trust the system to install the right extension. We've seen reports that just installing php5-intl or php7-intl isn't sufficient and that we need the meta package as well.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-16 21:24:32 +01:00
Adam Warner
082cfb2f1c Merge pull request #3137 from pi-hole/tweak/apilatest
Change to use API instead of the Location Header
2020-02-16 12:19:31 +00:00
Adam Warner
1072078e26 Change to use API instead of the Location Header
(some trailing whitespace removed)

Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2020-02-16 11:47:42 +00:00
DL6ER
f10a151469 Fix pihole -t sed instructions.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-12 21:05:02 +01:00
DL6ER
eadd82761c Add human-readable output of time of the last gravity run.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-12 19:51:40 +01:00
DL6ER
00f4393f48 Merge branch 'release/v5.0' into tweak/debugger_performance
2020-02-12 19:44:56 +01:00
DL6ER
50f6fffbdc Migrate debugger to domainlist and add printing of client table.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-12 19:43:55 +01:00
DL6ER
baf5340dc0 Show info table instead of counting domains to speed up the debugging process on low-end hardware drastically.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-12 19:39:12 +01:00
DL6ER
e528903488 Merge pull request #3107 from pi-hole/new/client_comments
Add timestamps and comment fields to clients table
2020-02-12 19:35:01 +01:00
DL6ER
dc2fce8e1d Store gravity update timestamp only after database swapping.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-12 19:26:25 +01:00
Dan Schaper
c4005c4a31 Delete FUNDING.yml
Organization-wide FUNDING now set up.
2020-02-11 09:56:28 -08:00
Adam Warner
0a70bbd255 Merge pull request #3120 from canihavesomecoffee/patch-1
Update Cameleon blacklist url to use https
2020-02-08 17:25:49 +00:00
Willem
c91d9cc0b6 Update Cameleon blacklist url to use https
Switches from http to https for the Cameleon (sysctl.org) blacklist.

Signed-off-by: canihavesomecoffee <canihavesomecoffee@users.noreply.github.com>
2020-02-08 17:06:03 +01:00
DL6ER
8e10c22356 Merge pull request #3106 from pi-hole/fix/group_assignments
DROP and reCREATE TRIGGERs during gravity swapping.
2020-02-07 17:40:34 +01:00
DL6ER
37a44c0773 Merge pull request #3115 from pi-hole/tweak/gravity_count
Store number of distinct gravity domains in database after counting
2020-02-05 23:25:36 +01:00
DL6ER
2a5cf221fa Store number of distinct gravity domains in database after counting.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-02-02 23:46:33 +01:00
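A minimal sketch of the idea, assuming the gravity database carries an `info(property, value)` table; table and property names are assumptions for this sketch:

```bash
# Count distinct gravity domains and persist the figure so the dashboard can read it
# without recounting.
count="$(sqlite3 /etc/pihole/gravity.db "SELECT COUNT(DISTINCT domain) FROM gravity;")"
sqlite3 /etc/pihole/gravity.db \
  "INSERT OR REPLACE INTO info (property, value) VALUES ('gravity_count', ${count});"
```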
DL6ER
92aa510bda Add timestamps and comment fields to clients. This updates the gravity database to version 11.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-27 10:36:16 +00:00
DL6ER
6b04997fc3 DROP and reCREATE TRIGGERs during gravity swapping.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-27 10:12:05 +00:00
Dan Schaper
e0b3405a4d Merge pull request #3098 from pi-hole/fix/pihole-t
Update blocked strings for pihole -t
2020-01-25 12:27:20 -08:00
DL6ER
10c2dad48a Improve gravity performance (#3100)
* Gravity performance improvements.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Do not move downloaded lists into migration_backup directory.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Do not (strictly) sort domains. Random-leaf access is faster than always-last-leaf access (on average).

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Append instead of overwrite gravity_new collection list.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Rename table gravity_new to gravity_temp to clarify that this is only an intermediate table.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Add timers for each of the calls to compute intense parts. They are to be removed before this finally hits the release/v5.0 branch.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Fix legacy list files import. It currently doesn't work when the gravity database has already been updated to using the single domainlist table.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Simplify database_table_from_file(), remove all calls to this function for gravity list downloads.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Update gravity.db.sql to version 10 so that newly created databases already reflect the most recent state.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Create second gravity database and swap them on success. This has a number of advantages such as instantaneous gravity updates (as seen from FTL) and always available gravity blocking. Furthermore, this saves disk space as the old database is removed on completion.

* Add timing output for the database swapping SQLite3 call.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Explicitly generate index as a separate process.

Signed-off-by: DL6ER <dl6er@dl6er.de>

* Remove time measurements.

Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-24 09:39:13 -08:00
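The swap idea from the last bullet points, in outline. Paths, the `gravity_temp` name and the reload step follow the wording above; the commands are a sketch, not the real gravity.sh:

```bash
# Build the new gravity data in a separate database file, then swap it into place so
# FTL sees the update instantaneously and blocking never goes away during a rebuild.
db="/etc/pihole/gravity.db"
tmp_db="${db}_temp"                        # intermediate database, name per the commit text

sqlite3 "${tmp_db}" < /etc/.pihole/advanced/Templates/gravity.db.sql   # fresh schema (path assumed)
# ... download the adlists and fill the gravity table inside "${tmp_db}" here ...
mv "${tmp_db}" "${db}"                     # near-atomic swap; the old database is replaced
pihole restartdns reload-lists             # ask FTL to pick up the new data (v5 subcommand)
```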
Dan Schaper
52e2a2610e Merge pull request #3089 from pi-hole/tweak/gravity_db_10
Add gravity database 9->10 update script
2020-01-24 09:23:34 -08:00
DL6ER
a809624356 Update blocked strings for pihole -t.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-23 19:19:44 +01:00
DL6ER
29f06a4444 Merge pull request #3090 from pi-hole/tweak/debug_group_humanreadable_timestamps
Print human-readable timestamps in the debugger's gravity output
2020-01-20 20:20:51 +01:00
DL6ER
3f9e79f152 Print human-readable timestamps in the debugger's gravity output
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-20 20:13:44 +01:00
DL6ER
633e56e8a9 Add gravity database 9->10 update script.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-20 17:59:24 +01:00
DL6ER
bf01f725f7 Merge pull request #3087 from pi-hole/fix/blocking_page
Remove dead code causing failure from the blocking page
2020-01-19 21:50:41 +01:00
DL6ER
276b191845 Remove dead code causing failure from the blocking page.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-19 21:39:49 +01:00
DL6ER
c7bc58e94b Merge pull request #3082 from pi-hole/tweak/gravity_database_locked
Add timeout to gravity database writing
2020-01-14 20:55:12 +01:00
DL6ER
8f22203d24 Wait 30 seconds for obtaining a database lock instead of immediately failing if the database is busy.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-14 20:02:00 +01:00
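For reference, the sqlite3 shell's `.timeout` dot-command is the usual way to get this behaviour from the CLI; the INSERT below is only a placeholder write and its type value is an assumption:

```bash
# Wait up to 30 000 ms for a write lock instead of failing immediately when another
# process (e.g. FTL or a running gravity update) holds the database.
sqlite3 /etc/pihole/gravity.db <<'EOSQL'
.timeout 30000
INSERT INTO domainlist (type, domain) VALUES (1, 'ads.example.com');
EOSQL
```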
DL6ER
782fec841e Merge pull request #3076 from pi-hole/new/intl_domains
Add package php-intl for AdminLTE#1130
2020-01-13 17:19:49 +01:00
DL6ER
cfa909a93d Add package php-intl for AdminLTE#1130.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2020-01-12 14:09:14 +01:00
Adam Warner
e0fde41d87 Merge pull request #3066 from pi-hole/centos8_support
Update installer to support CentOS 8
2020-01-05 14:39:24 +00:00
Adam Warner
574f7c1a1f Merge pull request #2962 from bcambl/remove_debconf-apt-progress
Remove debconf apt progress
2020-01-04 16:04:50 +00:00
bcambl
ec8f4050d0 Update installer to support CentOS 8
PHP dependency php-json is now required for both the latest Fedora and CentOS.
Package php-json will now be a default web dependency and will be removed from PIHOLE_WEB_DEPS when installing on CentOS 7.

Signed-off-by: bcambl <blayne@blaynecampbell.com>
2020-01-02 06:52:23 -06:00
bcambl
60c51886e0 remove unused debian deps (apt-utils debconf)
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2020-01-01 13:24:02 -06:00
bcambl
cbb1461010 add stdout horizontal rule to install_dependent_packages()
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2020-01-01 13:23:31 -06:00
bcambl
07cc5b501c replace debconf-apt-progress with apt-get in install_dependent_packages()
Removes the need for conditional debconf-apt-progress dependency checking

Signed-off-by: bcambl <blayne@blaynecampbell.com>
2020-01-01 13:11:41 -06:00
bcambl
ebb1a730c1 remove unused fedora/centos dependency: dialog
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2020-01-01 13:11:41 -06:00
MichaIng
9dff55b212 Installer | Remove "dialog" from Debian/Ubuntu installer deps
+ The installer uses `whiptail`, thus `dialog` is not required.

Signed-off-by: MichaIng <micha@dietpi.com>
2020-01-01 13:11:41 -06:00
DL6ER
8ae03b64d7 Merge pull request #3060 from pi-hole/propsed_8_to_9
Add a new migration script to fix the previous one
2019-12-30 11:58:57 +01:00
DL6ER
bb30c818ab Update database version during migration.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-30 09:21:30 +00:00
Adam Warner
c944f6a320 Add a new migration script to fix the previous one
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-12-29 23:32:31 +00:00
DL6ER
62ec7de963 Merge pull request #3058 from pi-hole/tweak/7_to_8
Don't create trigger with duplicate name until after old table is del…
2019-12-29 23:05:46 +01:00
Adam Warner
aa4c0ff329 Don't create trigger with duplicate name until after old table is deleted
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-12-29 20:35:11 +00:00
DL6ER
37217ece73 Merge pull request #3049 from pi-hole/tweak/unique_group_name
Group table enhancements
2019-12-28 14:19:04 +01:00
DL6ER
28d4f4b142 Merge pull request #3045 from pi-hole/tweak/gravity.db_permissions
Set permissions and ownership of gravity.db on pihole -g
2019-12-28 14:17:50 +01:00
DL6ER
8d5d423adb Merge pull request #3052 from pi-hole/revert/76460f0
Revert "Change the regex used for domain validation"
2019-12-21 13:16:12 +01:00
DL6ER
cda0133dd1 Revert "Change the regex used for domain validation"
This reverts commit 76460f01e9.

Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-21 11:15:18 +00:00
DL6ER
eda7f40fef Reinstall trigger that prevents group zero from being deleted.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-20 00:42:59 +00:00
DL6ER
e589e665a7 Also add date_added and date_modified fields to group table.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-20 00:21:25 +00:00
DL6ER
b32b5ad6e9 Update gravity database to version 8. This enforces uniqueness on the group name.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-20 00:09:10 +00:00
DL6ER
e2de199f47 Merge pull request #3037 from pi-hole/new/group_zero
Add special group zero to gravity database
2019-12-18 22:36:43 +01:00
DL6ER
948f4a8827 Ensure permissions and ownership of gravity.db are correctly set on each run of pihole -g. This would have prevented https://github.com/pi-hole/AdminLTE/issues/1077
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-16 09:55:46 +00:00
DL6ER
a1633123aa Merge pull request #3035 from pi-hole/fix/query_gravity
pihole -q should also scan gravity table
2019-12-16 01:45:10 +01:00
DL6ER
2444296348 Again, Mr. Stickler
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-15 11:55:19 +00:00
DL6ER
4be7ebe61f Scan domainlist instead of view to also catch disabled domains.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-15 11:47:53 +00:00
DL6ER
a720fe1789 Add client trigger.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-12 22:49:21 +00:00
DL6ER
2cec9eaf65 Merge pull request #3033 from pi-hole/fix/duplicates_in_adlists
Remove duplicates from adlists before importing
2019-12-12 21:37:26 +01:00
DL6ER
313f999af4 Merge pull request #3034 from pi-hole/tweak/gravity_url_displaying
Show full URL during gravity download
2019-12-12 21:37:20 +01:00
DL6ER
0b0ec43bf5 Merge pull request #3036 from pi-hole/fix/reload-lists
Improve list reloading
2019-12-12 21:37:09 +01:00
DL6ER
f0439c8d12 Add special group zero to gravity database.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-12 16:39:02 +00:00
DL6ER
40e8657137 Please Mr. Stickler
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-12 11:18:46 +00:00
DL6ER
52dd72dfa5 Ensure output is always correct and also display if domain has been found but is disabled
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-12 11:08:19 +00:00
DL6ER
922ce7359c pihole -q should also scan gravity table
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-12 10:58:41 +00:00
DL6ER
779fe670f7 Show full URL during gravity download instead of only domain and file
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-12 10:29:44 +00:00
DL6ER
570a7a5c11 Use sort -u instead of uniq as it is guaranteed to be safe when doing inline file operations.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-12 10:17:59 +00:00
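The point being made, as a small example:

```bash
file="downloaded.list"           # illustrative file name
sort -u -o "${file}" "${file}"   # safe: sort -o only writes after the input is fully read
# By contrast, `uniq "${file}" > "${file}"` truncates the file before uniq reads it.
```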
DL6ER
bd1b004d94 Remove possible duplicates found in lower-quality adlists
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-12 10:13:51 +00:00
DL6ER
5457b2c6ea Merge pull request #2935 from pi-hole/new/internal-blocking
Per-client blocking changes
2019-12-12 09:49:02 +01:00
Adam Warner
02f3316710 Merge pull request #3031 from pi-hole/fix/do_not_force_local_resolver
Do not force nameserver 127.0.0.1 through resolvconf
2019-12-11 22:11:44 +00:00
DL6ER
69a909fc4c On modification of lists, we should send real-time signal 0 instead of SIGHUP. This also preserves the DNS cache of not-blocked domains.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-11 21:47:46 +00:00
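In shell terms, this is the difference between the two signals (a sketch; only the process name pihole-FTL is taken from the commit):

```bash
# Real-time signal 0 (SIGRTMIN) makes pihole-FTL reload its lists while keeping the
# DNS cache of not-blocked domains; SIGHUP would clear that cache as well.
pkill -RTMIN pihole-FTL
# pkill -HUP pihole-FTL          # old behaviour: full reload, cache dropped
```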
Adam Warner
ec09b5843c Merge branch 'development' into fix/do_not_force_local_resolver
2019-12-11 19:09:02 +00:00
Adam Warner
078e7e1686 Merge pull request #3030 from pi-hole/fix/database-service-script
Ensure database permissions are set up correctly by the service script
2019-12-11 19:07:29 +00:00
Adam Warner
d29947ba32 optimise gravity list inserts
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-12-09 22:30:41 +00:00
Adam Warner
1f03faddef shellcheck recommendations
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-12-09 21:35:54 +00:00
Adam Warner
d1bce7e685 Merge pull request #2995 from pi-hole/tweak/NoFurtherThanLatestTag
Don't allow repo to go further than latest tag on master
2019-12-09 20:41:29 +00:00
Dan Schaper
880352ea65 Merge pull request #3013 from Jason-Cooke/patch-2
docs: fix typo
2019-12-09 10:59:04 -08:00
DL6ER
3231e5c3ba Address stickler requests.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-09 16:52:03 +00:00
DL6ER
f482156cca Merge branch 'development' into new/internal-blocking
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-09 16:49:16 +00:00
DL6ER
620e1e9c73 Do not force nameserver 127.0.0.1 through resolvconf in pihole-FTL.service
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-09 12:23:42 +00:00
DL6ER
8a119d72e2 Ensure database permissions are set up correctly by the service script.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-09 12:17:55 +00:00
DL6ER
807a5cfb23 Merge pull request #3015 from pi-hole/tweak/domainlist_table
Unite four domain tables into a single domainlist table.
2019-12-08 16:50:22 +01:00
Adam Warner
ca7a5bc0fe Merge pull request #3024 from pi-hole/fix/3003
Get binary name in update.sh
2019-12-04 21:23:13 +00:00
DL6ER
0c5185f8ba Also display how many unique domains we have caught in the event horizon.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-04 21:02:46 +00:00
Adam Warner
eaf1244932 :dominik: Detect binary name before calling FTLcheckUpdate in update.sh
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-12-04 20:10:46 +00:00
Adam Warner
7c2bbf840a Merge pull request #2993 from MichaIng/patch-3
Minor installer output enhancements
2019-12-04 18:53:58 +00:00
MichaIng
85673b8273 Print name of chosen upstream DNS as well
Signed-off-by: MichaIng <micha@dietpi.com>
2019-12-04 18:59:25 +01:00
DL6ER
b6cd7b8e3d Use more descriptive names instead of directly using the IDs in list.sh
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-12-02 17:27:32 +00:00
Adam Warner
869473172c remove _ from regex describers
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-12-01 12:50:24 +00:00
Adam Warner
63e407cfdc Update advanced/Scripts/list.sh
Co-Authored-By: DL6ER <DL6ER@users.noreply.github.com>
2019-12-01 12:45:22 +00:00
Adam Warner
0251117c77 Update advanced/Scripts/list.sh
Co-Authored-By: DL6ER <DL6ER@users.noreply.github.com>
2019-12-01 12:45:06 +00:00
Adam Warner
44e1455b12 Update advanced/Scripts/list.sh
Co-Authored-By: DL6ER <DL6ER@users.noreply.github.com>
2019-12-01 12:44:48 +00:00
Adam Warner
76460f01e9 Change the regex used for domain validation
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-30 17:45:07 +00:00
Adam Warner
4b8a72fda7 functionise parameter discovery
Rename HandleOther to ValidateDomain
Capital letters on the new functions

Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-30 16:26:26 +00:00
Adam Warner
edaee4e962 remove redundant function and comments
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-30 16:02:50 +00:00
Adam Warner
77bfb3fb67 tidy up variable usage in list.sh; remove some that are redundant
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-30 14:18:12 +00:00
Adam Warner
6a881545b0 tweak wording
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-30 13:25:32 +00:00
Adam Warner
d0de5fda30 Simplify removal of domain from one list when it is requested for another
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-30 13:13:26 +00:00
DL6ER
a1f120b2ff Address stickler's complaint
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-11-30 12:43:07 +00:00
DL6ER
185319d560 Unite four domain tables into a single domainlist table.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-11-30 12:33:51 +00:00
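A hedged sketch of what such a unified table looks like; the column set and the type-to-list mapping in the comments are assumptions drawn from how the table is commonly documented, not a copy of the shipped schema:

```bash
sqlite3 /etc/pihole/gravity.db <<'EOSQL'
CREATE TABLE IF NOT EXISTS domainlist (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    type          INTEGER NOT NULL DEFAULT 0,  -- 0 exact white, 1 exact black, 2 regex white, 3 regex black (assumed mapping)
    domain        TEXT    NOT NULL,
    enabled       BOOLEAN NOT NULL DEFAULT 1,
    date_added    INTEGER NOT NULL DEFAULT (CAST(strftime('%s', 'now') AS INT)),
    date_modified INTEGER NOT NULL DEFAULT (CAST(strftime('%s', 'now') AS INT)),
    comment       TEXT,
    UNIQUE (domain, type)
);
EOSQL
```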
DL6ER
5c6dd3f6f4 Merge pull request #2978 from Mograine/patch-1
Add commands to add/remove custom DNS records
2019-11-29 13:25:09 +01:00
Jason Cooke
8e5abc1f15 docs: fix typo
2019-11-29 13:46:05 +13:00
Adam Warner
9248c92b5c Merge pull request #2984 from diginc/development
Adding docker+arm detection & FTL download
2019-11-27 21:25:41 +00:00
Adam Warner
583ea4d17a Merge branch 'development' into development
2019-11-27 21:17:05 +00:00
Adam Warner
edcdf9f619 Merge pull request #3003 from pi-hole/fix/tbd
FTL always determined.
2019-11-27 09:46:17 +00:00
Mograine
c809c34024 Add user feedback
Signed-off-by: Mograine <ghiot.pierre@gmail.com>
2019-11-27 00:28:44 +01:00
DL6ER
037d52104a New command "pihole -g -r" recreates gravity.db based on files backed up in /etc/pihole/migration_update. This is useful to restore a working version of the database when the user destroyed the original database. Also, update gravity.db to version 5 because of a fix we needed to implement.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-11-26 10:58:39 +01:00
Adam Warner
1fb70c977c Merge pull request #3002 from pi-hole/tweak/output-format
add a double space to the beginning of some outputs
2019-11-25 19:41:58 +00:00
Adam Warner
eeb26e3975 Merge pull request #2990 from chrunchyjesus/unix-compliance
make some shebangs comply with the POSIX standard
2019-11-16 12:26:49 +00:00
Adam Warner
12817c09bb (Squashed commits)
Always ensure we have the correct machine arch by storing to/reading from a file rather than depending on a global variable that for some reason is not always populated...

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

no need for global variable

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

Use a file in the temporary FTL download directory

Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>

Local binary variable named to l_binary. Disambiguate from global binary.

Allow 'binary' to be shadowed for testing.

Use ./ftlbinary in all operations.

Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>

Revert shadow ability on binary variable.

Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>

Remove unused tests, binary variable can not be overridden.

Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>

This should work here, too

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

binary name is passed through from pihole checkout

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

Add comments

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

OK, let's try it this way again

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

we might be getting somewhere.. squash after this I think!

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

This is a test to see if it fixes the aarch64 test (we are definitely squashing these commits)

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

fix the rest of the tests

Signed-off-by: Adam Warner <me@adamwarner.co.uk>

Remove trailing whitespace in the files we've touched here

Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-15 19:49:09 +00:00
Adam Warner
4840bdb031 add a double space to the beginning of some outputs
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-14 19:06:23 +00:00
Adam Warner
a85e7a2a43 Merge pull request #2999 from pi-hole:fix/api_utf8_encoding
Add php-xml package as new dependency
2019-11-13 19:03:53 +00:00
Mograine
b93628acb3 Merge branch 'development' of https://github.com/Mograine/pi-hole into patch-1 2019-11-13 09:44:48 +01:00
DL6ER
7f7b9d089c Merge pull request #2965 from pi-hole/tweak/BackendChangesForAdlistComments
backend changes to allow comment when adding new adlist
2019-11-12 21:50:19 +01:00
DL6ER
61d233f069 Merge pull request #2964 from bcambl/selinux_enforcing
Exit installation when SELinux in unsupported state
2019-11-12 21:48:15 +01:00
DL6ER
d457d40e0b Add php-xml package as new dependency.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-11-12 20:49:46 +01:00
Adam Warner
6571a63ffa Add --tags to describe command
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-11 20:36:51 +00:00
Adam Warner
a7e81c8ea0 remove extra space
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-11-11 20:12:31 +00:00
Adam Warner
73d9abae3e And finally, we please stickler
Signed-off-by: Adam Warner <adamw@rner.email>
2019-11-08 20:58:42 +00:00
Adam Warner
c8b9e42649 Please Codefactor.
Signed-off-by: Adam Warner <adamw@rner.email>
2019-11-08 19:18:35 +00:00
Adam Warner
62c00ae1d8 pushd/popd instead of juggling with a variable
Signed-off-by: Adam Warner <adamw@rner.email>
2019-11-08 19:11:55 +00:00
MichaIng
ea67c828cd Minor installer output enhancements
+ Print restart hint after setting IPv4 address on a separate line with [i] prefix to not break text alignment
+ Print final upstream DNS choice as a single printf call and by this fix missing info and linebreak on "Custom" choices.
+ Minor if/then/else code alignment

Signed-off-by: MichaIng <micha@dietpi.com>
2019-11-07 13:59:44 +01:00
chrunchyjesus
476975540a make some shebangs comply with the POSIX standard
2019-11-05 22:33:00 +01:00
Adam Hill
3fbb0ac8dd Adding docker+arm detection & FTL download
Signed-off-by: Adam Hill <adam@diginc.us>
2019-10-29 22:26:46 -05:00
Adam Warner
71903eb27f Add in checks to reset cloned repo to the latest available release
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-10-28 22:35:01 +00:00
Mograine
193ff38ab3 Allow more precise deletion by passing ip as parameter
Signed-off-by: Mograine <ghiot.pierre@gmail.com>
2019-10-28 13:21:05 +01:00
Pierre Ghiot
bb8dbe9da5 Update 01-pihole.conf
Signed-off-by: Mograine <ghiot.pierre@gmail.com>
2019-10-27 16:55:54 +01:00
Pierre Ghiot
f9d16c2b15 Update webpage.sh
Signed-off-by: Mograine <ghiot.pierre@gmail.com>
2019-10-27 16:55:54 +01:00
Adam Warner
29bad2fe9b Merge pull request #2963 from bcambl/fedora_pkg_check_stdout
Fix dependency check stdout on Fedora/CentOS
2019-10-16 19:57:23 +01:00
Adam Warner
f4aca3f21d Merge pull request #2966 from Asuza/minor-typo
Minor typo
2019-10-16 19:34:56 +01:00
John Krull
c6f9fe3af2 Fix spelling of the word "permitting"
Signed-off-by: John Krull <john.a.krull@gmail.com>
2019-10-15 21:29:55 -05:00
bcambl
612d408034 replace echo with printf in install_dependent_packages()
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2019-10-14 20:16:40 -06:00
bcambl
a86f578139 replace echo with printf in checkSelinux()
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2019-10-14 20:06:23 -06:00
Adam Warner
5bac1ad58b backend changes to allow comment when adding new adlist
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-10-14 22:59:58 +01:00
bcambl
cf2b021502 linting: E302 expected 2 blank lines, found 1
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2019-10-14 13:29:43 -06:00
bcambl
cd9b1fcb8c update tests for SELinux changes
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2019-10-14 13:02:44 -06:00
bcambl
81ca78e7f4 exit installer if SELinux is enforcing
The Pi-hole project does not ship a custom SELinux policy as the required policy would lower the overall system security.
Users who require SELinux to be enforcing are encouraged to create a custom policy on a case-by-case basis.

Signed-off-by: bcambl <blayne@blaynecampbell.com>
2019-10-14 12:25:24 -06:00
bcambl
fc0899b2ad fix fedora dependency check/install stdout
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2019-10-13 14:35:38 -06:00
Adam Warner
2e138eb99f Merge pull request #2954 from pi-hole/reetP-Patch
Update pihole
2019-10-06 19:10:05 +01:00
John Crisp
4f21f67775 Update pihole
Fix spelling typos
2019-10-06 15:09:14 +01:00
DL6ER
d883854aad Use constant for long path.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-10-03 12:12:32 +02:00
DL6ER
756c99653e Merge branch 'development' into new/internal-blocking
2019-10-03 12:01:27 +02:00
Mark Drobnak
3269c63f89 Merge pull request #2948 from pi-hole/fix/restart_lighttpd
Do not create empty regex.list file
2019-09-29 12:27:25 -04:00
DL6ER
149fb0c216 Do not install a blank regex file.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-29 18:03:37 +02:00
DL6ER
d244a018d0 Merge pull request #2944 from pi-hole/fix/vw_gravity_creation_v1
Fix gravity database table creation order
2019-09-26 14:27:05 +02:00
DL6ER
2e0370367c Print when we upgrade gravity database version. This will make possibly failed upgrades easier to debug.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-26 14:02:20 +02:00
DL6ER
3cb4f6d9d4 We cannot create vw_gravity before having created vw_whitelist, as the former depends on the latter. This commit changes the order in which the tables are created.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-26 13:50:54 +02:00
Mark Drobnak
ae3b8be4d4 Merge pull request #2938 from pi-hole/release/v4.3.2
Release v4.3.2 merge to development for update.
2019-09-21 20:40:38 -04:00
Adam Warner
61a40c1b43 merge devel into 4.3.2 and resolve merge conflicts
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-09-22 01:16:44 +01:00
DL6ER
a27c7b1398 regex white- and blacklist views need to be re-created as well, as we need the ID for storing internally whether or not we try to match a given regex for a specific client.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-18 20:58:44 +02:00
DL6ER
a71f35d263 Merge pull request #2932 from pi-hole/fix/no-backup-no-error
Fix cross where there is no error
2019-09-17 23:51:46 +02:00
Adam Warner
9a6deb5a1a Fix tests
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-09-17 21:16:49 +01:00
DL6ER
f582344b9a "No default index.lighttpd.html file found... not backing up" is not an error.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-17 21:59:48 +02:00
Dan Schaper
e41c4b5bb6 Merge pull request #2881 from pi-hole/release/v4.3.2
Pi-hole Core v4.3.2
2019-09-15 08:52:21 -07:00
DL6ER
7b48431917 Add client_by_group table like we have for the other lists. It stores associations between individual clients and list groups.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-09 00:03:57 +02:00
Mark Drobnak
847c4f26aa Merge pull request #2916 from pi-hole/fix/disable-firefox-doh
Improve #2915
2019-09-07 17:58:02 -04:00
DL6ER
1f36ec48e3 Add use-application-dns.net = NXDOMAIN in ProcessDNSSettings rather than in the template so we can ensure that it will survive config-renewals.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-07 23:11:20 +02:00
DL6ER
ff08add7c0 Update vw_whitelist and vw_blacklist to return group_id alongside domain so we can filter if the current client wants to get this domain blocked or not.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-07 13:01:36 +02:00
DL6ER
b4131ae817 Merge pull request #2915 from pi-hole/new/disable-firefox-doh
Prevent Firefox from automatically switching over to DNS-over-HTTPS
2019-09-07 12:24:01 +02:00
DL6ER
ffc91a6c81 Update view vw_gravity to only return domains from enabled adlists.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-07 11:17:53 +02:00
DL6ER
525ec8cd01 Signal to Firefox that the local network is unsuitable for DNS-over-HTTPS
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-07 08:44:03 +02:00
Dan Schaper
b209629579 Merge pull request #2909 from pi-hole/fix/domains_in_comment
Print messages only after removing possible matches in comments
2019-09-05 21:30:34 -07:00
DL6ER
a8af2e1837 Store domains without sorting and unifying them first. This allows us to preserve the relationship of the individual domains to the lists they came from.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-04 23:14:29 +02:00
Mark Drobnak
93ecc046ea Merge pull request #2912 from pi-hole/tweak/RemoveAdblockSupport
Remove adblock list style support
2019-09-03 20:51:09 -04:00
Adam Warner
8bef5dc805 remove n from -ne
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-09-03 23:56:23 +01:00
Adam Warner
ad41bcca5a Remove support for adblock style lists to prevent false positives
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-09-03 23:43:11 +01:00
DL6ER
aed2e35bc0 Print messages only after removing possible matches in comments.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-02 22:39:28 +02:00
DL6ER
ab90ff565a Merge pull request #2903 from pi-hole/tweak/store-gravity-timestamp
Store timestamp when the gravity table was last updated successfully
2019-09-01 19:05:46 +02:00
DL6ER
ca8982494b Store timestamp when the gravity table was last updated successfully. This fixes https://github.com/pi-hole/AdminLTE/issues/989
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-09-01 14:42:07 +02:00
DL6ER
a7b44426cd Merge pull request #2838 from pi-hole/new/whitelist-regex-support
Whitelist regex support
2019-09-01 14:23:37 +02:00
Mark Drobnak
e7af42a9f8 Merge pull request #2897 from pi-hole/fix/readOnlyBin
Remove readonly attribute of the PI_HOLE_BIN_DIR declaration in pihole
2019-08-31 14:07:58 -04:00
Dan Schaper
b9fed8fca6 Merge pull request #2891 from niklasea/development
Restore 'pihole -q' hosts format support and improve matching in edge cases
2019-08-30 20:56:52 -07:00
Adam Warner
79b8dac0fa Remove readonly attribute of the PI_HOLE_BIN_DIR declaration in pihole
Signed-off-by: Adam Warner <me@adamwarner.co.uk>
2019-08-30 22:06:14 +01:00
DL6ER
d8eee47ca4 Add dhcp-ignore-names option when enabling DHCP service. We currently remove anything that starts with "dhcp-" to have a clean configuration and removed these lines without noticing when enabling the DHCP server.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-28 22:10:26 -07:00
Niklas Elmose Andersen
a3e32d9a15 Properly escape domain regex
Dots in domain names should not match any character.

Signed-off-by: Niklas Elmose Andersen <mail@niklasea.dk>
2019-08-27 12:13:28 +02:00
Niklas Elmose Andersen
989d1aff60 Restore and improve 'pihole -q' matching
Removes regex lookaround which 'grep -E' does not support.
Restores support for blocklists in hosts format.
Simplifies domain match cleanup logic by eliminating an if-condition.
Improves domain matching by eliminating commented domain names,
eliminating false positives in a few edge cases.

Signed-off-by: Niklas Elmose Andersen <mail@niklasea.dk>
2019-08-26 20:39:56 +02:00
Mark Drobnak
95b2560a08 Merge pull request #2874 from snapsl/tweak/webpage-shellcheck
tweaked webpage.sh
2019-08-26 10:56:38 -04:00
Dan Schaper
76133074d1 Merge pull request #2886 from pi-hole/fix/fullpath
Call 'pihole' with full path in install and cronjobs.
2019-08-25 15:38:19 -07:00
Dan Schaper
4cfe463dfa Add back dropped binary call.
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2019-08-24 04:57:23 -07:00
Dan Schaper
03c65dd0e9 Convert hardcoded /usr/local/bin to variable
Update pihole script with full path to 'pihole'

Variable for webpage.sh 'pihole' call.

Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2019-08-24 04:49:14 -07:00
DL6ER
6faddfcd3d Print timestamps in local time zone of the Pi-hole.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-23 10:09:52 +02:00
DL6ER
1820c2c598 Merge branch 'development' into new/whitelist-regex-support
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-22 14:19:51 +02:00
DL6ER
23b688287f Fix indentation in query.sh. No functional change in this commit.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-22 14:12:58 +02:00
DL6ER
42ccc1ef24 Add support for regex whitelist in "pihole -q".
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-22 14:06:42 +02:00
DL6ER
aef7892de6 Add missing hyphens.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-22 13:57:01 +02:00
DL6ER
cc40c18f49 Wrap upgrade script commands in a transaction.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-22 13:54:46 +02:00
DL6ER
b1838512b2 Explicitly select columns (and their order) when listing the database tables. Print timestamps translated to strings instead of printing the integer timestamps.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-22 13:39:58 +02:00
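A small example of the same idea, selecting named columns and rendering the Unix timestamps with SQLite's datetime() (column names assumed from the surrounding commits):

```bash
sqlite3 /etc/pihole/gravity.db "
  SELECT id, domain, enabled,
         datetime(date_added,    'unixepoch', 'localtime') AS added,
         datetime(date_modified, 'unixepoch', 'localtime') AS modified
  FROM domainlist;"
```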
Dan Schaper
1c50caa8ca Merge pull request #2882 from pi-hole/fix/FTL-latest-tag-not-found
Fix error when getting latest FTL tag
2019-08-21 15:31:04 -07:00
Mcat12
febdbceab1 Fix error when getting latest FTL tag
The headers containing the latest FTL tag were not properly input to the
command (`<` vs `<<<`). This caused Bash to try and open the file named
after the header string, which does not exist.

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-21 10:03:54 -04:00
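The distinction the commit message describes, in isolation (the header content is illustrative):

```bash
headers=$'HTTP/1.1 302 Found\nLocation: https://github.com/pi-hole/FTL/releases/latest'
grep -i location <<< "${headers}"   # here-string: feeds the variable's *value* to grep
# grep -i location <  "${headers}"  # wrong: tries to open a file literally named after the string
```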
Mark Drobnak
597b4bfcca Merge pull request #2873 from pi-hole/fix/pihole_restartdns
Simplify restarting code for "pihole restartdns"
2019-08-21 08:57:44 -04:00
Dan Schaper
5c65006a66 Merge base branch changes
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2019-08-21 05:20:10 -07:00
David Haguenauer
34727c00c6 Drop indirection from install_dependent_packages
Previously, install_dependent_packages would receive an array variable
name as its single parameter, and would use variable indirection to
access it; this change simplifies that function so that it instead
receives the expanded array.

Signed-off-by: David Haguenauer <ml@kurokatta.org>
2019-08-21 04:41:45 -07:00
Andreas Kurth
352146ef92 Fix pihole manpage to match code.
The dry-run argument to pihole -up is "--check-only", not "--checkonly".

Signed-off-by: Andreas Kurth <github@akurth.de>
2019-08-21 04:39:13 -07:00
Mcat12
b107ae2ab9 Use the filtered IPv6 OpenDNS servers
The ones we were using previously were not filtered. See
https://support.opendns.com/hc/en-us/articles/227986667-Does-OpenDNS-Support-IPv6-

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-21 04:39:10 -07:00
Mcat12
d5d1a607ad Fix PKG_REMOVE array usage
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-21 04:39:07 -07:00
Mcat12
2594164772 Use an array for PKG_REMOVE
Fixes shellcheck warning.

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-21 04:39:05 -07:00
Mcat12
209555c42e Fix uninstall causing 403 errors and not removing packages
The 403 lighttpd errors were caused by removing the lighttpd config
directory and not removing lighttpd itself. This caused a subsequent
Pi-hole reinstall to not have all of the required lighttpd config files.

The error while removing packages was caused by combining arguments into
a string instead of listing each argument.

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-21 04:39:01 -07:00
DL6ER
e27f50b8e5 Try to obtain PID from PIDFILE. If this fails (file does not exist or is empty), fall back to using pidof + awk
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-21 04:38:58 -07:00
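A sketch of that fallback chain; the pid-file path is an assumption for illustration:

```bash
PIDFILE="/run/pihole-FTL.pid"                    # assumed location
FTL_PID="$(cat "${PIDFILE}" 2>/dev/null)"
if [[ -z "${FTL_PID}" ]]; then
    # File missing or empty: fall back to pidof and keep the last PID reported.
    FTL_PID="$(pidof pihole-FTL | awk '{print $(NF)}')"
fi
echo "pihole-FTL PID: ${FTL_PID}"
```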
DL6ER
484f618685 Use last PID in case pidof returns multiple PIDs for pihole-FTL
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-21 04:38:55 -07:00
Mcat12
da398c3d9c Print an error message if the FTL release metadata download fails
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-21 04:38:51 -07:00
Mcat12
4e0ad52001 Fix ShellCheck issue by refactoring a bit
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-21 04:38:24 -07:00
Mcat12
c9829dd3e4 Fix pihole -up showing FTL update when network is down
Fixes #1877

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-21 04:38:04 -07:00
Dan Schaper
35cf863f4b Create FUNDING.yml
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2019-08-21 04:38:01 -07:00
Andreas
c53be459c6 quick fix for when dig also returns a CNAME
Signed-off-by: Andreas <ryrun@online.de>
2019-08-21 04:37:06 -07:00
B. Olausson
ab1ea5a366 This change fixes issue #145 "stty: standard input: Inappropriate ioctl for device". It checks whether a real terminal exists; if not, it sets the screen size to a fixed value. This helps to avoid nasty and unnecessary logs when running "pihole -up" via e.g. cron.
Signed-off-by: B. Olausson <contactme@olausson.de>
2019-08-20 14:48:03 -07:00
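The shape of the fix, as a sketch:

```bash
# Only ask stty for the terminal size when stdin is a real TTY; otherwise use a fixed
# fallback so unattended runs (e.g. cron) do not log "Inappropriate ioctl for device".
if [[ -t 0 ]]; then
    screen_size="$(stty size)"
else
    screen_size="24 80"
fi
echo "using screen size: ${screen_size}"
```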
bcambl
97e11bd94e ensure installation dependencies for FTL tests which rely on /etc/init.d
Signed-off-by: bcambl <blayne@blaynecampbell.com>
2019-08-20 11:42:12 -07:00
bcambl
10de7f649b add chkconfig to INSTALLER_DEPS (CentOS/Fedora)
chkconfig is a dependency of spawn-fcgi which is a dependency of lighttpd which is installed via PIHOLE_WEB_DEPS in phase 2
adding chkconfig to INSTALLER_DEPS to ensure /etc/init.d is present during the installation prompts (phase 1)

Signed-off-by: bcambl <blayne@blaynecampbell.com>
2019-08-20 11:42:09 -07:00
bcambl
d793ef1ab8 Merge conflict Fedora Dockerfile for tests pinned to 30
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2019-08-20 11:40:18 -07:00
Jeroen Baert
d3d45a8776 Fix 404 error when browsing to pi.hole (without /admin) (for fedora)
Signed-off-by: Jeroen Baert <3607063+Forceflow@users.noreply.github.com>
2019-08-20 11:29:20 -07:00
Jeroen Baert
9f86fd0cb4 Fix for 404 error when browsing to pi.hole (without /admin)
Signed-off-by: Jeroen Baert <3607063+Forceflow@users.noreply.github.com>
2019-08-20 11:29:17 -07:00
Mcat12
71d5b42726 Remove the ZeusTracker blocklist from the defaults
It is no longer served. Fixes #2843.

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-08-20 11:05:06 -07:00
DL6ER
3e78ed95d4 Fix displaying options for table "group" in the debugger.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-17 15:04:04 +02:00
snapsl
20a839fef5 fixed local declaration before assignment
Signed-off-by: snapsl <chris.baller@gmx.de>
2019-08-15 11:20:55 +02:00
snapsl
b2d8c4374b tweaked code style of webpage.sh
Signed-off-by: snapsl <chris.baller@gmx.de>
2019-08-14 23:28:13 +02:00
DL6ER
251c9fee98 Simplify restarting code for "pihole restartdns". This fixes #2869.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-14 21:14:55 +02:00
Mark Drobnak
9f77810ca8 Merge pull request #2774 from pi-hole/meta/funding
Create FUNDING.yml
2019-08-12 10:30:33 -04:00
DL6ER
dc93462d42 Group table has only two columns
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-06 20:28:00 +02:00
DL6ER
4371c9ba03 Ensure proper permissions are set for gravity.db after creation.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-05 21:20:07 +02:00
DL6ER
6e2e825a5f Rename option "pihole --whiteregex" to "pihole --white-regex" for the sake of readability. The same applies to "whitewild" -> "white-wild"
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-05 21:10:52 +02:00
DL6ER
af754e3fc4 Rearrange group tables directly next to the tables they refer to.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-05 21:08:36 +02:00
DL6ER
06860ed5b4 Group tables have only two columns.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-05 21:07:39 +02:00
DL6ER
09190c1735 Only check once for if this is a regex list or not.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-05 21:03:47 +02:00
DL6ER
a95b473417 Rearrange if statements to ensure the proper output is shown for wildcard-style filters.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-08-05 20:56:01 +02:00
Mark Drobnak
56e3565a9e Merge pull request #2865 from ryrun/patch-1
quick fix for when dig also returns a CNAME
2019-08-05 10:58:27 -04:00
Andreas
63230cb72d quick fix for when dig also returns a CNAME
Signed-off-by: ryrun <ryrun@online.de>
2019-08-04 21:21:08 +02:00
Mark Drobnak
f81e57d5b8 Merge pull request #2862 from bolausson/mybranch
Check if TTY exist before we get screen size - Second PR try
2019-07-30 14:58:56 -04:00
B. Olausson
ecd6817aaf This change fixes issue #145 "stty: standard input: Inappropriate ioctl for device". It checks whether a real terminal exists; if not, it sets the screen size to a fixed value. This helps to avoid nasty and unnecessary logs when running "pihole -up" via e.g. cron.
Signed-off-by: B. Olausson <contactme@olausson.de>
2019-07-29 19:48:56 +01:00
DL6ER
6f58d58cae Add --whitewild to help texts and man pages.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-22 22:26:27 +02:00
DL6ER
40d0caa70b Add undocumented --whitewild option that does the same as --wild, but for the whitelist.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-22 21:15:28 +02:00
DL6ER
0692be9bae Fix small mistake in 2->3 upgrade script.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-22 20:59:52 +02:00
DL6ER
0d28dce326 Print group table contents in debug log.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-22 20:18:15 +02:00
DL6ER
96031214c6 Add support for whitelist regex filter management via CLI.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-22 19:36:11 +02:00
Mark Drobnak
3420439f31 Merge pull request #2820 from pi-hole/fix/ftl-update-no-network
Fix pihole -up showing FTL update when network is down
2019-07-20 14:55:48 -04:00
Mark Drobnak
ab3f6dfcc6 Merge pull request #2831 from pi-hole/fix/block-page-adlists
Fix block page errors due to gravity DB and changes to queryAds
2019-07-20 14:55:37 -04:00
Mcat12
3ebd43ebf0 Remove outdated adlists.list check and fix empty adlists error message
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-19 17:39:00 -07:00
Mcat12
38ff343134 Print an error message if the FTL release metadata download fails
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-19 17:35:21 -07:00
Mark Drobnak
6a8d3100d2 Merge pull request #2846 from pi-hole/fix/zeus-dead-adlist
Remove the ZeusTracker blocklist from the defaults
2019-07-18 13:43:57 -04:00
Mcat12
c3ec2e68ad Remove the ZeusTracker blocklist from the defaults
It is no longer served. Fixes #2843.

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-12 20:03:36 -07:00
Mark Drobnak
bfe714e985 Merge pull request #2840 from pi-hole/fix/valid_ip-quote-error
Fix error when checking if IP address is valid
2019-07-11 23:06:39 -04:00
Mcat12
1d5755a4c2 Add tests for valid_ip
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-10 21:18:58 -07:00
Mark Drobnak
445127accc Merge pull request #2832 from pi-hole/new/audit_database
Migrate audit list to gravity.db database table
2019-07-10 22:55:48 -04:00
Mcat12
c156af020c Use suggested array creation to fix linter error
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-10 19:52:17 -07:00
Mark Drobnak
fa8751f9ad Fix error when checking if IP address is valid
During install in `valid_ip`, we split up the IP address into octets to verify it is valid (each is <= 255).

This validation was broken in #2743 when a variable usage was quoted where it should have stayed unquoted:
```
./automated install/basic-install.sh: line 942: [[: 192.241.211.120: syntax error: invalid arithmetic operator (error token is ".241.211.120")
```

Due to this error, `127.0.0.1` would be used instead of the requested IP address. Also, this prevented the user from entering a custom DNS server as it would be marked as an invalid IP address.

Signed-off-by: Mark Drobnak <mark.drobnak@gmail.com>
2019-07-10 19:42:51 -07:00
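A hedged sketch of the octet check this message describes (not the real valid_ip implementation). Per the message, the original bug came from quoting a variable that relied on word splitting, which left the whole address in one element and made the arithmetic -le test fail exactly as shown above; the sketch sidesteps that by letting read -a do the splitting.
```
# Illustrative re-implementation of the described check (names assumed).
valid_ip_sketch() {
    local ip="${1}" octets octet
    # read -a splits on IFS itself, so quoting the here-string is safe here
    IFS='.' read -r -a octets <<< "${ip}"
    [[ "${#octets[@]}" -eq 4 ]] || return 1
    for octet in "${octets[@]}"; do
        [[ "${octet}" =~ ^[0-9]+$ ]] || return 1
        [[ "${octet}" -le 255 ]] || return 1
    done
}
valid_ip_sketch "192.241.211.120" && echo "valid"
```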
DL6ER
420f60b5c7 Add timeout to migration script (1->2).
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-10 12:02:07 +02:00
DL6ER
65fdbc85d5 Add timeout to migration script (2->3).
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-10 12:01:38 +02:00
DL6ER
87f75c737a Review comments.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-10 12:00:38 +02:00
DL6ER
5ff9052200 Review comments
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-09 11:41:44 +02:00
DL6ER
9641e268ea Merge pull request #2837 from pi-hole/fix/debug-use-FTL-file-locations
Get file locations of FTL files from the config
2019-07-09 07:54:57 +02:00
Mcat12
b154dd5f07 Quote calls to read FTL config
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-08 19:48:50 -07:00
DL6ER
0683842ec3 Fix typo in 2->3 migration script.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-08 21:43:49 +02:00
DL6ER
f5121c64be We should still add the regex lines (initially) to the regex table, as the renaming happens only after the import.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-08 21:39:30 +02:00
DL6ER
054c7a2c05 Create new table + view regex_whitelist + rename old regex table to regex_blacklist. This updates the gravity.db version to 3.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-08 21:35:31 +02:00
DL6ER
3d3fc2947e Review comments
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-08 19:22:35 +02:00
Mcat12
e8e5d4afda Get file locations of FTL files from the config
Instead of hardcoding the location of certain FTL files (`gravity.db`,
`pihole-FTL.log`), read the configured location from FTL's config. The
default location is used if no custom location has been configured.

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-07 18:10:39 -07:00
DL6ER
8382f4d727 Rename table to domain_audit and simplify subroutine addAudit().
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-07 21:21:56 +02:00
DL6ER
be3e198f9a Address linting errors.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-07 10:46:20 +02:00
DL6ER
acc50b709e Only migrate files once (domain and adlist lists during the initial creation of gravity.db, auditlog.list on the database upgrade from version 1 to 2).
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-07 10:33:08 +02:00
DL6ER
efe8216445 Fix further stickler complaint.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-06 09:45:07 +02:00
DL6ER
0405aaa3da Review comments and fixing stickler complaints.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-06 09:32:41 +02:00
DL6ER
2fb4256f84 Rename table to "auditlist"
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-05 16:28:36 +02:00
DL6ER
82476138c1 Instead of calling sqlite3 multiple times within a loop, we use the ability to add multiple rows within one INSERT clause. This is supported since sqlite3 3.7.11 (2012-03-20) and should be available on all systems.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-05 16:09:13 +02:00
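A sketch of the pattern this commit describes: build one multi-row VALUES list and invoke sqlite3 once instead of once per domain. The file and table names follow the audit-list work in the surrounding commits but are assumptions here.
```
# Sketch: one INSERT with many VALUES rows (SQLite >= 3.7.11)
# instead of a sqlite3 call per line. Assumes domains contain no single quotes.
rows=""
while IFS= read -r domain; do
    [[ -z "${domain}" ]] && continue
    rows="${rows}${rows:+,}('${domain}')"
done < /etc/pihole/auditlog.list
if [[ -n "${rows}" ]]; then
    sqlite3 /etc/pihole/gravity.db "INSERT OR IGNORE INTO domain_audit (domain) VALUES ${rows};"
fi
```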
DL6ER
5293beeb77 Update audit script to store domains in new database table.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-05 14:10:33 +02:00
DL6ER
0c8f5f1221 Remove comment field from audit table
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-05 14:06:05 +02:00
DL6ER
4f4a12bb40 Upgrade database if necessary and store audit domains therein.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-05 14:03:57 +02:00
DL6ER
1dbe6c83c3 Add database upgrading mechanism for adding the audit table.
Signed-off-by: DL6ER <dl6er@dl6er.de>
2019-07-05 13:54:18 +02:00
Mcat12
2b5033e732 Add missing spaces found by linter
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-04 13:49:39 -07:00
Mcat12
8d9ff550d4 Fix blockpage error if whitelisted, blacklisted, or regex filtered
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-04 13:44:14 -07:00
Mcat12
f1733f9c5d Fetch adlists for the block page from gravity.db
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-07-04 13:11:46 -07:00
Mark Drobnak
1a741f696e Merge pull request #2816 from RamSet/hotfix/lighttpdMime
Fix lighttpd mime
2019-06-29 15:45:33 -04:00
Mcat12
37e7cd5211 Fix ShellCheck issue by refactoring a bit
Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-06-28 21:19:07 -07:00
Mcat12
91a2d052a7 Fix pihole -up showing FTL update when network is down
Fixes #1877

Signed-off-by: Mcat12 <newtoncat12@yahoo.com>
2019-06-28 20:49:56 -07:00
Dan Schaper
a09f92f9cc Create FUNDING.yml
Signed-off-by: Dan Schaper <dan.schaper@pi-hole.net>
2019-05-31 22:12:54 -07:00
34 changed files with 1835 additions and 1061 deletions

View File

@@ -175,7 +175,7 @@ While quite outdated at this point, [this original blog post about Pi-hole](http
----- -----
## Coverage ## Coverage
- [Lifehacker: Turn A Raspberry Pi Into An Ad Blocker With A Single Command](https://www.lifehacker.com.au/2015/02/turn-a-raspberry-pi-into-an-ad-blocker-with-a-single-command/) (Feburary, 2015) - [Lifehacker: Turn A Raspberry Pi Into An Ad Blocker With A Single Command](https://www.lifehacker.com.au/2015/02/turn-a-raspberry-pi-into-an-ad-blocker-with-a-single-command/) (February, 2015)
- [MakeUseOf: Adblock Everywhere: The Raspberry Pi-Hole Way](http://www.makeuseof.com/tag/adblock-everywhere-raspberry-pi-hole-way/) (March, 2015) - [MakeUseOf: Adblock Everywhere: The Raspberry Pi-Hole Way](http://www.makeuseof.com/tag/adblock-everywhere-raspberry-pi-hole-way/) (March, 2015)
- [Catchpoint: Ad-Blocking on Apple iOS9: Valuing the End User Experience](http://blog.catchpoint.com/2015/09/14/ad-blocking-apple/) (September, 2015) - [Catchpoint: Ad-Blocking on Apple iOS9: Valuing the End User Experience](http://blog.catchpoint.com/2015/09/14/ad-blocking-apple/) (September, 2015)
- [Security Now Netcast: Pi-hole](https://www.youtube.com/watch?v=p7-osq_y8i8&t=100m26s) (October, 2015) - [Security Now Netcast: Pi-hole](https://www.youtube.com/watch?v=p7-osq_y8i8&t=100m26s) (October, 2015)

View File

@@ -19,6 +19,7 @@
############################################################################### ###############################################################################
addn-hosts=/etc/pihole/local.list addn-hosts=/etc/pihole/local.list
addn-hosts=/etc/pihole/custom.list
domain-needed domain-needed

View File

@@ -0,0 +1,113 @@
#!/usr/bin/env bash
# shellcheck disable=SC1090
# Pi-hole: A black hole for Internet advertisements
# (c) 2019 Pi-hole, LLC (https://pi-hole.net)
# Network-wide ad blocking via your own hardware.
#
# Updates gravity.db database
#
# This file is copyright under the latest version of the EUPL.
# Please see LICENSE file for your rights under this license.
readonly scriptPath="/etc/.pihole/advanced/Scripts/database_migration/gravity"
upgrade_gravityDB(){
local database piholeDir auditFile version
database="${1}"
piholeDir="${2}"
auditFile="${piholeDir}/auditlog.list"
# Get database version
version="$(sqlite3 "${database}" "SELECT \"value\" FROM \"info\" WHERE \"property\" = 'version';")"
if [[ "$version" == "1" ]]; then
# This migration script upgrades the gravity.db file by
# adding the domain_audit table
echo -e " ${INFO} Upgrading gravity database from version 1 to 2"
sqlite3 "${database}" < "${scriptPath}/1_to_2.sql"
version=2
# Store audit domains in database table
if [ -e "${auditFile}" ]; then
echo -e " ${INFO} Migrating content of ${auditFile} into new database"
# database_table_from_file is defined in gravity.sh
database_table_from_file "domain_audit" "${auditFile}"
fi
fi
if [[ "$version" == "2" ]]; then
# This migration script upgrades the gravity.db file by
# renaming the regex table to regex_blacklist, and
# creating a new regex_whitelist table + corresponding linking table and views
echo -e " ${INFO} Upgrading gravity database from version 2 to 3"
sqlite3 "${database}" < "${scriptPath}/2_to_3.sql"
version=3
fi
if [[ "$version" == "3" ]]; then
# This migration script unifies the formerly separate domain
# lists into a single table with a UNIQUE domain constraint
echo -e " ${INFO} Upgrading gravity database from version 3 to 4"
sqlite3 "${database}" < "${scriptPath}/3_to_4.sql"
version=4
fi
if [[ "$version" == "4" ]]; then
# This migration script upgrades the gravity and list views
# implementing necessary changes for per-client blocking
echo -e " ${INFO} Upgrading gravity database from version 4 to 5"
sqlite3 "${database}" < "${scriptPath}/4_to_5.sql"
version=5
fi
if [[ "$version" == "5" ]]; then
# This migration script upgrades the adlist view
# to return an ID used in gravity.sh
echo -e " ${INFO} Upgrading gravity database from version 5 to 6"
sqlite3 "${database}" < "${scriptPath}/5_to_6.sql"
version=6
fi
if [[ "$version" == "6" ]]; then
# This migration script adds a special group with ID 0
# which is automatically associated to all clients not
# having their own group assignments
echo -e " ${INFO} Upgrading gravity database from version 6 to 7"
sqlite3 "${database}" < "${scriptPath}/6_to_7.sql"
version=7
fi
if [[ "$version" == "7" ]]; then
# This migration script recreates the group table
# to ensure uniqueness on the group name
# We also add date_added and date_modified columns
echo -e " ${INFO} Upgrading gravity database from version 7 to 8"
sqlite3 "${database}" < "${scriptPath}/7_to_8.sql"
version=8
fi
if [[ "$version" == "8" ]]; then
# This migration fixes some issues that were introduced
# in the previous migration script.
echo -e " ${INFO} Upgrading gravity database from version 8 to 9"
sqlite3 "${database}" < "${scriptPath}/8_to_9.sql"
version=9
fi
if [[ "$version" == "9" ]]; then
# This migration drops unused tables and creates triggers to remove
# obsolete group assignments when the linked items are deleted
echo -e " ${INFO} Upgrading gravity database from version 9 to 10"
sqlite3 "${database}" < "${scriptPath}/9_to_10.sql"
version=10
fi
if [[ "$version" == "10" ]]; then
# This adds timestamp and an optional comment field to the client table
# These fields are only temporary and will be replaced by the columns
# defined in gravity.db.sql during gravity swapping. We add them here
# to keep the copying process generic (needs the same columns in both the
# source and the destination databases).
echo -e " ${INFO} Upgrading gravity database from version 10 to 11"
sqlite3 "${database}" < "${scriptPath}/10_to_11.sql"
version=11
fi
if [[ "$version" == "11" ]]; then
# Rename group 0 from "Unassociated" to "Default"
echo -e " ${INFO} Upgrading gravity database from version 11 to 12"
sqlite3 "${database}" < "${scriptPath}/11_to_12.sql"
version=12
fi
}
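A hedged sketch of how this function might be invoked; the script filename and the sourcing of COL_TABLE (which provides ${INFO}) are assumptions, not shown in this diff.
```
# Hypothetical call site - filename and paths assumed.
source "/opt/pihole/COL_TABLE"
source "/etc/.pihole/advanced/Scripts/database_migration/gravity-db.sh"
upgrade_gravityDB "/etc/pihole/gravity.db" "/etc/pihole"
```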

View File

@@ -0,0 +1,16 @@
.timeout 30000
BEGIN TRANSACTION;
ALTER TABLE client ADD COLUMN date_added INTEGER;
ALTER TABLE client ADD COLUMN date_modified INTEGER;
ALTER TABLE client ADD COLUMN comment TEXT;
CREATE TRIGGER tr_client_update AFTER UPDATE ON client
BEGIN
UPDATE client SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE id = NEW.id;
END;
UPDATE info SET value = 11 WHERE property = 'version';
COMMIT;

View File

@@ -0,0 +1,19 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
UPDATE "group" SET name = 'Default' WHERE id = 0;
UPDATE "group" SET description = 'The default group' WHERE id = 0;
DROP TRIGGER IF EXISTS tr_group_zero;
CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
INSERT OR IGNORE INTO "group" (id,enabled,name,description) VALUES (0,1,'Default','The default group');
END;
UPDATE info SET value = 12 WHERE property = 'version';
COMMIT;

View File

@@ -0,0 +1,14 @@
.timeout 30000
BEGIN TRANSACTION;
CREATE TABLE domain_audit
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
domain TEXT UNIQUE NOT NULL,
date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int))
);
UPDATE info SET value = 2 WHERE property = 'version';
COMMIT;

View File

@@ -0,0 +1,65 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
ALTER TABLE regex RENAME TO regex_blacklist;
CREATE TABLE regex_blacklist_by_group
(
regex_blacklist_id INTEGER NOT NULL REFERENCES regex_blacklist (id),
group_id INTEGER NOT NULL REFERENCES "group" (id),
PRIMARY KEY (regex_blacklist_id, group_id)
);
INSERT INTO regex_blacklist_by_group SELECT * FROM regex_by_group;
DROP TABLE regex_by_group;
DROP VIEW vw_regex;
DROP TRIGGER tr_regex_update;
CREATE VIEW vw_regex_blacklist AS SELECT DISTINCT domain
FROM regex_blacklist
LEFT JOIN regex_blacklist_by_group ON regex_blacklist_by_group.regex_blacklist_id = regex_blacklist.id
LEFT JOIN "group" ON "group".id = regex_blacklist_by_group.group_id
WHERE regex_blacklist.enabled = 1 AND (regex_blacklist_by_group.group_id IS NULL OR "group".enabled = 1)
ORDER BY regex_blacklist.id;
CREATE TRIGGER tr_regex_blacklist_update AFTER UPDATE ON regex_blacklist
BEGIN
UPDATE regex_blacklist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;
CREATE TABLE regex_whitelist
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
domain TEXT UNIQUE NOT NULL,
enabled BOOLEAN NOT NULL DEFAULT 1,
date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
comment TEXT
);
CREATE TABLE regex_whitelist_by_group
(
regex_whitelist_id INTEGER NOT NULL REFERENCES regex_whitelist (id),
group_id INTEGER NOT NULL REFERENCES "group" (id),
PRIMARY KEY (regex_whitelist_id, group_id)
);
CREATE VIEW vw_regex_whitelist AS SELECT DISTINCT domain
FROM regex_whitelist
LEFT JOIN regex_whitelist_by_group ON regex_whitelist_by_group.regex_whitelist_id = regex_whitelist.id
LEFT JOIN "group" ON "group".id = regex_whitelist_by_group.group_id
WHERE regex_whitelist.enabled = 1 AND (regex_whitelist_by_group.group_id IS NULL OR "group".enabled = 1)
ORDER BY regex_whitelist.id;
CREATE TRIGGER tr_regex_whitelist_update AFTER UPDATE ON regex_whitelist
BEGIN
UPDATE regex_whitelist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;
UPDATE info SET value = 3 WHERE property = 'version';
COMMIT;
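After this upgrade, whitelist regexps live in their own table and are exposed to clients through the new view. A quick inspection, sketched with the default database path:
```
# Sketch: list enabled regex whitelist entries the way the new view exposes them.
sqlite3 /etc/pihole/gravity.db "SELECT domain FROM vw_regex_whitelist;"
```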

View File

@@ -0,0 +1,96 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
CREATE TABLE domainlist
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
type INTEGER NOT NULL DEFAULT 0,
domain TEXT UNIQUE NOT NULL,
enabled BOOLEAN NOT NULL DEFAULT 1,
date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
comment TEXT
);
ALTER TABLE whitelist ADD COLUMN type INTEGER;
UPDATE whitelist SET type = 0;
INSERT INTO domainlist (type,domain,enabled,date_added,date_modified,comment)
SELECT type,domain,enabled,date_added,date_modified,comment FROM whitelist;
ALTER TABLE blacklist ADD COLUMN type INTEGER;
UPDATE blacklist SET type = 1;
INSERT INTO domainlist (type,domain,enabled,date_added,date_modified,comment)
SELECT type,domain,enabled,date_added,date_modified,comment FROM blacklist;
ALTER TABLE regex_whitelist ADD COLUMN type INTEGER;
UPDATE regex_whitelist SET type = 2;
INSERT INTO domainlist (type,domain,enabled,date_added,date_modified,comment)
SELECT type,domain,enabled,date_added,date_modified,comment FROM regex_whitelist;
ALTER TABLE regex_blacklist ADD COLUMN type INTEGER;
UPDATE regex_blacklist SET type = 3;
INSERT INTO domainlist (type,domain,enabled,date_added,date_modified,comment)
SELECT type,domain,enabled,date_added,date_modified,comment FROM regex_blacklist;
DROP TABLE whitelist_by_group;
DROP TABLE blacklist_by_group;
DROP TABLE regex_whitelist_by_group;
DROP TABLE regex_blacklist_by_group;
CREATE TABLE domainlist_by_group
(
domainlist_id INTEGER NOT NULL REFERENCES domainlist (id),
group_id INTEGER NOT NULL REFERENCES "group" (id),
PRIMARY KEY (domainlist_id, group_id)
);
DROP TRIGGER tr_whitelist_update;
DROP TRIGGER tr_blacklist_update;
DROP TRIGGER tr_regex_whitelist_update;
DROP TRIGGER tr_regex_blacklist_update;
CREATE TRIGGER tr_domainlist_update AFTER UPDATE ON domainlist
BEGIN
UPDATE domainlist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;
DROP VIEW vw_whitelist;
CREATE VIEW vw_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
FROM domainlist
LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
AND domainlist.type = 0
ORDER BY domainlist.id;
DROP VIEW vw_blacklist;
CREATE VIEW vw_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
FROM domainlist
LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
AND domainlist.type = 1
ORDER BY domainlist.id;
DROP VIEW vw_regex_whitelist;
CREATE VIEW vw_regex_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
FROM domainlist
LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
AND domainlist.type = 2
ORDER BY domainlist.id;
DROP VIEW vw_regex_blacklist;
CREATE VIEW vw_regex_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
FROM domainlist
LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
AND domainlist.type = 3
ORDER BY domainlist.id;
UPDATE info SET value = 4 WHERE property = 'version';
COMMIT;
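After this migration the four former lists differ only in the type column (0 = exact whitelist, 1 = exact blacklist, 2 = regex whitelist, 3 = regex blacklist). A quick way to inspect the result, sketched with the default database path:
```
# Sketch: count entries per list type in the unified domainlist table.
sqlite3 /etc/pihole/gravity.db "SELECT type, COUNT(*) FROM domainlist GROUP BY type;"
```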

View File

@@ -0,0 +1,38 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
DROP TABLE gravity;
CREATE TABLE gravity
(
domain TEXT NOT NULL,
adlist_id INTEGER NOT NULL REFERENCES adlist (id),
PRIMARY KEY(domain, adlist_id)
);
DROP VIEW vw_gravity;
CREATE VIEW vw_gravity AS SELECT domain, adlist_by_group.group_id AS group_id
FROM gravity
LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = gravity.adlist_id
LEFT JOIN adlist ON adlist.id = gravity.adlist_id
LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1);
CREATE TABLE client
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
ip TEXT NOT NULL UNIQUE
);
CREATE TABLE client_by_group
(
client_id INTEGER NOT NULL REFERENCES client (id),
group_id INTEGER NOT NULL REFERENCES "group" (id),
PRIMARY KEY (client_id, group_id)
);
UPDATE info SET value = 5 WHERE property = 'version';
COMMIT;

View File

@@ -0,0 +1,18 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
DROP VIEW vw_adlist;
CREATE VIEW vw_adlist AS SELECT DISTINCT address, adlist.id AS id
FROM adlist
LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = adlist.id
LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1)
ORDER BY adlist.id;
UPDATE info SET value = 6 WHERE property = 'version';
COMMIT;
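The rebuilt view now returns the adlist ID alongside the address, which gravity.sh uses to tag downloaded domains. A sketch of what it exposes:
```
# Sketch: each enabled adlist with the ID used to label its gravity entries.
sqlite3 /etc/pihole/gravity.db "SELECT id, address FROM vw_adlist;"
```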

View File

@@ -0,0 +1,35 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
INSERT OR REPLACE INTO "group" (id,enabled,name) VALUES (0,1,'Unassociated');
INSERT INTO domainlist_by_group (domainlist_id, group_id) SELECT id, 0 FROM domainlist;
INSERT INTO client_by_group (client_id, group_id) SELECT id, 0 FROM client;
INSERT INTO adlist_by_group (adlist_id, group_id) SELECT id, 0 FROM adlist;
CREATE TRIGGER tr_domainlist_add AFTER INSERT ON domainlist
BEGIN
INSERT INTO domainlist_by_group (domainlist_id, group_id) VALUES (NEW.id, 0);
END;
CREATE TRIGGER tr_client_add AFTER INSERT ON client
BEGIN
INSERT INTO client_by_group (client_id, group_id) VALUES (NEW.id, 0);
END;
CREATE TRIGGER tr_adlist_add AFTER INSERT ON adlist
BEGIN
INSERT INTO adlist_by_group (adlist_id, group_id) VALUES (NEW.id, 0);
END;
CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
INSERT OR REPLACE INTO "group" (id,enabled,name) VALUES (0,1,'Unassociated');
END;
UPDATE info SET value = 7 WHERE property = 'version';
COMMIT;
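The INSERT triggers above link every new domain, client, and adlist to group 0 automatically. A short sketch of the effect, using an example domain and the default database path:
```
# Sketch: after tr_domainlist_add, a freshly inserted domain is already in group 0.
sqlite3 /etc/pihole/gravity.db "INSERT INTO domainlist (domain, type) VALUES ('example.com', 1);"
sqlite3 /etc/pihole/gravity.db "SELECT d.domain, dbg.group_id
                                  FROM domainlist d
                                  JOIN domainlist_by_group dbg ON dbg.domainlist_id = d.id
                                 WHERE d.domain = 'example.com';"
```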

View File

@@ -0,0 +1,35 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
ALTER TABLE "group" RENAME TO "group__";
CREATE TABLE "group"
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
enabled BOOLEAN NOT NULL DEFAULT 1,
name TEXT UNIQUE NOT NULL,
date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
description TEXT
);
CREATE TRIGGER tr_group_update AFTER UPDATE ON "group"
BEGIN
UPDATE "group" SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE id = NEW.id;
END;
INSERT OR IGNORE INTO "group" (id,enabled,name,description) SELECT id,enabled,name,description FROM "group__";
DROP TABLE "group__";
CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
INSERT OR IGNORE INTO "group" (id,enabled,name) VALUES (0,1,'Unassociated');
END;
UPDATE info SET value = 8 WHERE property = 'version';
COMMIT;

View File

@@ -0,0 +1,27 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
DROP TRIGGER IF EXISTS tr_group_update;
DROP TRIGGER IF EXISTS tr_group_zero;
PRAGMA legacy_alter_table=ON;
ALTER TABLE "group" RENAME TO "group__";
PRAGMA legacy_alter_table=OFF;
ALTER TABLE "group__" RENAME TO "group";
CREATE TRIGGER tr_group_update AFTER UPDATE ON "group"
BEGIN
UPDATE "group" SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE id = NEW.id;
END;
CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
INSERT OR IGNORE INTO "group" (id,enabled,name) VALUES (0,1,'Unassociated');
END;
UPDATE info SET value = 9 WHERE property = 'version';
COMMIT;

View File

@@ -0,0 +1,29 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
DROP TABLE IF EXISTS whitelist;
DROP TABLE IF EXISTS blacklist;
DROP TABLE IF EXISTS regex_whitelist;
DROP TABLE IF EXISTS regex_blacklist;
CREATE TRIGGER tr_domainlist_delete AFTER DELETE ON domainlist
BEGIN
DELETE FROM domainlist_by_group WHERE domainlist_id = OLD.id;
END;
CREATE TRIGGER tr_adlist_delete AFTER DELETE ON adlist
BEGIN
DELETE FROM adlist_by_group WHERE adlist_id = OLD.id;
END;
CREATE TRIGGER tr_client_delete AFTER DELETE ON client
BEGIN
DELETE FROM client_by_group WHERE client_id = OLD.id;
END;
UPDATE info SET value = 10 WHERE property = 'version';
COMMIT;
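These DELETE triggers keep the *_by_group linking tables consistent once a list entry is removed. Sketched effect, continuing the hypothetical example domain:
```
# Sketch: tr_domainlist_delete removes the matching group links as well,
# so no orphaned rows remain in domainlist_by_group.
sqlite3 /etc/pihole/gravity.db "DELETE FROM domainlist WHERE domain = 'example.com';"
sqlite3 /etc/pihole/gravity.db "SELECT COUNT(*) FROM domainlist_by_group
                                 WHERE domainlist_id NOT IN (SELECT id FROM domainlist);"  # expected: 0
```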

View File

@@ -21,58 +21,77 @@ web=false
domList=() domList=()
listType="" typeId=""
listname="" comment=""
declare -i domaincount
domaincount=0
colfile="/opt/pihole/COL_TABLE" colfile="/opt/pihole/COL_TABLE"
source ${colfile} source ${colfile}
# IDs are hard-wired to domain interpretation in the gravity database scheme
# Clients (including FTL) will read them through the corresponding views
readonly whitelist="0"
readonly blacklist="1"
readonly regex_whitelist="2"
readonly regex_blacklist="3"
GetListnameFromTypeId() {
if [[ "$1" == "${whitelist}" ]]; then
echo "whitelist"
elif [[ "$1" == "${blacklist}" ]]; then
echo "blacklist"
elif [[ "$1" == "${regex_whitelist}" ]]; then
echo "regex whitelist"
elif [[ "$1" == "${regex_blacklist}" ]]; then
echo "regex blacklist"
fi
}
GetListParamFromTypeId() {
if [[ "${typeId}" == "${whitelist}" ]]; then
echo "w"
elif [[ "${typeId}" == "${blacklist}" ]]; then
echo "b"
elif [[ "${typeId}" == "${regex_whitelist}" && "${wildcard}" == true ]]; then
echo "-white-wild"
elif [[ "${typeId}" == "${regex_whitelist}" ]]; then
echo "-white-regex"
elif [[ "${typeId}" == "${regex_blacklist}" && "${wildcard}" == true ]]; then
echo "-wild"
elif [[ "${typeId}" == "${regex_blacklist}" ]]; then
echo "-regex"
fi
}
helpFunc() { helpFunc() {
if [[ "${listType}" == "whitelist" ]]; then local listname param
param="w"
type="whitelist" listname="$(GetListnameFromTypeId "${typeId}")"
elif [[ "${listType}" == "regex" && "${wildcard}" == true ]]; then param="$(GetListParamFromTypeId)"
param="-wild"
type="wildcard blacklist"
elif [[ "${listType}" == "regex" ]]; then
param="-regex"
type="regex filter"
else
param="b"
type="blacklist"
fi
echo "Usage: pihole -${param} [options] <domain> <domain2 ...> echo "Usage: pihole -${param} [options] <domain> <domain2 ...>
Example: 'pihole -${param} site.com', or 'pihole -${param} site1.com site2.com' Example: 'pihole -${param} site.com', or 'pihole -${param} site1.com site2.com'
${type^} one or more domains ${listname^} one or more domains
Options: Options:
-d, --delmode Remove domain(s) from the ${type} -d, --delmode Remove domain(s) from the ${listname}
-nr, --noreload Update ${type} without reloading the DNS server -nr, --noreload Update ${listname} without reloading the DNS server
-q, --quiet Make output less verbose -q, --quiet Make output less verbose
-h, --help Show this help dialog -h, --help Show this help dialog
-l, --list Display all your ${type}listed domains -l, --list Display all your ${listname}listed domains
--nuke Removes all entries in a list" --nuke Removes all entries in a list"
exit 0 exit 0
} }
EscapeRegexp() { ValidateDomain() {
# This way we may safely insert an arbitrary
# string in our regular expressions
# This sed is intentionally executed in three steps to ease maintainability
# The first sed removes any amount of leading dots
echo $* | sed 's/^\.*//' | sed "s/[]\.|$(){}?+*^]/\\\\&/g" | sed "s/\\//\\\\\//g"
}
HandleOther() {
# Convert to lowercase # Convert to lowercase
domain="${1,,}" domain="${1,,}"
# Check validity of domain (don't check for regex entries) # Check validity of domain (don't check for regex entries)
if [[ "${#domain}" -le 253 ]]; then if [[ "${#domain}" -le 253 ]]; then
if [[ "${listType}" == "regex" && "${wildcard}" == false ]]; then if [[ ( "${typeId}" == "${regex_blacklist}" || "${typeId}" == "${regex_whitelist}" ) && "${wildcard}" == false ]]; then
validDomain="${domain}" validDomain="${domain}"
else else
validDomain=$(grep -P "^((-|_)*[a-z\\d]((-|_)*[a-z\\d])*(-|_)*)(\\.(-|_)*([a-z\\d]((-|_)*[a-z\\d])*))*$" <<< "${domain}") # Valid chars check validDomain=$(grep -P "^((-|_)*[a-z\\d]((-|_)*[a-z\\d])*(-|_)*)(\\.(-|_)*([a-z\\d]((-|_)*[a-z\\d])*))*$" <<< "${domain}") # Valid chars check
@@ -81,21 +100,15 @@ HandleOther() {
fi fi
if [[ -n "${validDomain}" ]]; then if [[ -n "${validDomain}" ]]; then
domList=("${domList[@]}" ${validDomain}) domList=("${domList[@]}" "${validDomain}")
else else
echo -e " ${CROSS} ${domain} is not a valid argument or domain name!" echo -e " ${CROSS} ${domain} is not a valid argument or domain name!"
fi fi
domaincount=$((domaincount+1))
} }
ProcessDomainList() { ProcessDomainList() {
if [[ "${listType}" == "regex" ]]; then
# Regex filter list
listname="regex filters"
else
# Whitelist / Blacklist
listname="${listType}"
fi
for dom in "${domList[@]}"; do for dom in "${domList[@]}"; do
# Format domain into regex filter if requested # Format domain into regex filter if requested
if [[ "${wildcard}" == true ]]; then if [[ "${wildcard}" == true ]]; then
@@ -105,77 +118,87 @@ ProcessDomainList() {
# Logic: If addmode then add to desired list and remove from the other; # Logic: If addmode then add to desired list and remove from the other;
# if delmode then remove from desired list but do not add to the other # if delmode then remove from desired list but do not add to the other
if ${addmode}; then if ${addmode}; then
AddDomain "${dom}" "${listType}" AddDomain "${dom}"
if [[ ! "${listType}" == "regex" ]]; then
RemoveDomain "${dom}" "${listAlt}"
fi
else else
RemoveDomain "${dom}" "${listType}" RemoveDomain "${dom}"
fi fi
done done
} }
AddDomain() { AddDomain() {
local domain list num local domain num requestedListname existingTypeId existingListname
# Use printf to escape domain. %q prints the argument in a form that can be reused as shell input
domain="$1" domain="$1"
list="$2"
# Is the domain in the list we want to add it to? # Is the domain in the list we want to add it to?
num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${list} WHERE domain = '${domain}';")" num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}';")"
requestedListname="$(GetListnameFromTypeId "${typeId}")"
if [[ "${num}" -ne 0 ]]; then if [[ "${num}" -ne 0 ]]; then
if [[ "${verbose}" == true ]]; then existingTypeId="$(sqlite3 "${gravityDBfile}" "SELECT type FROM domainlist WHERE domain = '${domain}';")"
echo -e " ${INFO} ${1} already exists in ${listname}, no need to add!" if [[ "${existingTypeId}" == "${typeId}" ]]; then
if [[ "${verbose}" == true ]]; then
echo -e " ${INFO} ${1} already exists in ${requestedListname}, no need to add!"
fi
else
existingListname="$(GetListnameFromTypeId "${existingTypeId}")"
sqlite3 "${gravityDBfile}" "UPDATE domainlist SET type = ${typeId} WHERE domain='${domain}';"
if [[ "${verbose}" == true ]]; then
echo -e " ${INFO} ${1} already exists in ${existingListname}, it has been moved to ${requestedListname}!"
fi
fi fi
return return
fi fi
# Domain not found in the table, add it! # Domain not found in the table, add it!
if [[ "${verbose}" == true ]]; then if [[ "${verbose}" == true ]]; then
echo -e " ${INFO} Adding ${1} to the ${listname}..." echo -e " ${INFO} Adding ${domain} to the ${requestedListname}..."
fi fi
reload=true reload=true
# Insert only the domain here. The enabled and date_added fields will be filled # Insert only the domain here. The enabled and date_added fields will be filled
# with their default values (enabled = true, date_added = current timestamp) # with their default values (enabled = true, date_added = current timestamp)
sqlite3 "${gravityDBfile}" "INSERT INTO ${list} (domain) VALUES ('${domain}');" if [[ -z "${comment}" ]]; then
sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type) VALUES ('${domain}',${typeId});"
else
# also add comment when variable has been set through the "--comment" option
sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type,comment) VALUES ('${domain}',${typeId},'${comment}');"
fi
} }
RemoveDomain() { RemoveDomain() {
local domain list num local domain num requestedListname
# Use printf to escape domain. %q prints the argument in a form that can be reused as shell input
domain="$1" domain="$1"
list="$2"
# Is the domain in the list we want to remove it from? # Is the domain in the list we want to remove it from?
num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${list} WHERE domain = '${domain}';")" num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};")"
requestedListname="$(GetListnameFromTypeId "${typeId}")"
if [[ "${num}" -eq 0 ]]; then if [[ "${num}" -eq 0 ]]; then
if [[ "${verbose}" == true ]]; then if [[ "${verbose}" == true ]]; then
echo -e " ${INFO} ${1} does not exist in ${list}, no need to remove!" echo -e " ${INFO} ${domain} does not exist in ${requestedListname}, no need to remove!"
fi fi
return return
fi fi
# Domain found in the table, remove it! # Domain found in the table, remove it!
if [[ "${verbose}" == true ]]; then if [[ "${verbose}" == true ]]; then
echo -e " ${INFO} Removing ${1} from the ${listname}..." echo -e " ${INFO} Removing ${domain} from the ${requestedListname}..."
fi fi
reload=true reload=true
# Remove it from the current list # Remove it from the current list
sqlite3 "${gravityDBfile}" "DELETE FROM ${list} WHERE domain = '${domain}';" sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};"
} }
Displaylist() { Displaylist() {
local list listname count num_pipes domain enabled status nicedate local count num_pipes domain enabled status nicedate requestedListname
listname="${listType}" requestedListname="$(GetListnameFromTypeId "${typeId}")"
data="$(sqlite3 "${gravityDBfile}" "SELECT domain,enabled,date_modified FROM ${listType};" 2> /dev/null)" data="$(sqlite3 "${gravityDBfile}" "SELECT domain,enabled,date_modified FROM domainlist WHERE type = ${typeId};" 2> /dev/null)"
if [[ -z $data ]]; then if [[ -z $data ]]; then
echo -e "Not showing empty ${listname}" echo -e "Not showing empty list"
else else
echo -e "Displaying ${listname}:" echo -e "Displaying ${requestedListname}:"
count=1 count=1
while IFS= read -r line while IFS= read -r line
do do
@@ -208,15 +231,25 @@ Displaylist() {
} }
NukeList() { NukeList() {
sqlite3 "${gravityDBfile}" "DELETE FROM ${listType};" sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE type = ${typeId};"
} }
for var in "$@"; do GetComment() {
case "${var}" in comment="$1"
"-w" | "whitelist" ) listType="whitelist"; listAlt="blacklist";; if [[ "${comment}" =~ [^a-zA-Z0-9_\#:/\.,\ -] ]]; then
"-b" | "blacklist" ) listType="blacklist"; listAlt="whitelist";; echo " ${CROSS} Found invalid characters in domain comment!"
"--wild" | "wildcard" ) listType="regex"; wildcard=true;; exit
"--regex" | "regex" ) listType="regex";; fi
}
while (( "$#" )); do
case "${1}" in
"-w" | "whitelist" ) typeId=0;;
"-b" | "blacklist" ) typeId=1;;
"--white-regex" | "white-regex" ) typeId=2;;
"--white-wild" | "white-wild" ) typeId=2; wildcard=true;;
"--wild" | "wildcard" ) typeId=3; wildcard=true;;
"--regex" | "regex" ) typeId=3;;
"-nr"| "--noreload" ) reload=false;; "-nr"| "--noreload" ) reload=false;;
"-d" | "--delmode" ) addmode=false;; "-d" | "--delmode" ) addmode=false;;
"-q" | "--quiet" ) verbose=false;; "-q" | "--quiet" ) verbose=false;;
@@ -224,13 +257,15 @@ for var in "$@"; do
"-l" | "--list" ) Displaylist;; "-l" | "--list" ) Displaylist;;
"--nuke" ) NukeList;; "--nuke" ) NukeList;;
"--web" ) web=true;; "--web" ) web=true;;
* ) HandleOther "${var}";; "--comment" ) GetComment "${2}"; shift;;
* ) ValidateDomain "${1}";;
esac esac
shift
done done
shift shift
if [[ $# = 0 ]]; then if [[ ${domaincount} == 0 ]]; then
helpFunc helpFunc
fi fi
@@ -242,5 +277,5 @@ echo "DONE"
fi fi
if [[ "${reload}" != false ]]; then if [[ "${reload}" != false ]]; then
pihole restartdns reload pihole restartdns reload-lists
fi fi
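The reworked argument parsing above maps the CLI flags onto the numeric type IDs and adds an optional --comment. Typical invocations, with placeholder domains:
```
# Exact blacklist entry with a comment
pihole -b ads.example.com --comment "manually blocked"
# Regex whitelist entry (new --white-regex option)
pihole --white-regex '(\.|^)cdn\.example\.com$'
# Wildcard-style blacklist, stored as a regex filter
pihole --wild example.net
```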

View File

@@ -36,13 +36,6 @@ flushARP(){
echo -ne " ${INFO} Flushing network table ..." echo -ne " ${INFO} Flushing network table ..."
fi fi
# Flush ARP cache to avoid re-adding of dead entries
if ! output=$(ip neigh flush all 2>&1); then
echo -e "${OVER} ${CROSS} Failed to clear ARP cache"
echo " Output: ${output}"
return 1
fi
# Truncate network_addresses table in pihole-FTL.db # Truncate network_addresses table in pihole-FTL.db
# This needs to be done before we can truncate the network table due to # This needs to be done before we can truncate the network table due to
# foreign key constraints # foreign key constraints

View File

@@ -46,6 +46,12 @@ checkout() {
local corebranches local corebranches
local webbranches local webbranches
# Check if FTL is installed - do this early on as FTL is a hard dependency for Pi-hole
local funcOutput
funcOutput=$(get_binary_name) #Store output of get_binary_name here
local binary
binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)
# Avoid globbing # Avoid globbing
set -f set -f
@@ -86,7 +92,6 @@ checkout() {
fi fi
#echo -e " ${TICK} Pi-hole Core" #echo -e " ${TICK} Pi-hole Core"
get_binary_name
local path local path
path="development/${binary}" path="development/${binary}"
echo "development" > /etc/pihole/ftlbranch echo "development" > /etc/pihole/ftlbranch
@@ -101,7 +106,6 @@ checkout() {
fetch_checkout_pull_branch "${webInterfaceDir}" "master" || { echo " ${CROSS} Unable to pull Web master branch"; exit 1; } fetch_checkout_pull_branch "${webInterfaceDir}" "master" || { echo " ${CROSS} Unable to pull Web master branch"; exit 1; }
fi fi
#echo -e " ${TICK} Web Interface" #echo -e " ${TICK} Web Interface"
get_binary_name
local path local path
path="master/${binary}" path="master/${binary}"
echo "master" > /etc/pihole/ftlbranch echo "master" > /etc/pihole/ftlbranch
@@ -161,7 +165,6 @@ checkout() {
fi fi
checkout_pull_branch "${webInterfaceDir}" "${2}" checkout_pull_branch "${webInterfaceDir}" "${2}"
elif [[ "${1}" == "ftl" ]] ; then elif [[ "${1}" == "ftl" ]] ; then
get_binary_name
local path local path
path="${2}/${binary}" path="${2}/${binary}"

View File

@@ -94,7 +94,35 @@ PIHOLE_RAW_BLOCKLIST_FILES="${PIHOLE_DIRECTORY}/list.*"
PIHOLE_LOCAL_HOSTS_FILE="${PIHOLE_DIRECTORY}/local.list" PIHOLE_LOCAL_HOSTS_FILE="${PIHOLE_DIRECTORY}/local.list"
PIHOLE_LOGROTATE_FILE="${PIHOLE_DIRECTORY}/logrotate" PIHOLE_LOGROTATE_FILE="${PIHOLE_DIRECTORY}/logrotate"
PIHOLE_SETUP_VARS_FILE="${PIHOLE_DIRECTORY}/setupVars.conf" PIHOLE_SETUP_VARS_FILE="${PIHOLE_DIRECTORY}/setupVars.conf"
PIHOLE_GRAVITY_DB_FILE="${PIHOLE_DIRECTORY}/gravity.db" PIHOLE_FTL_CONF_FILE="${PIHOLE_DIRECTORY}/pihole-FTL.conf"
# Read the value of an FTL config key. The value is printed to stdout.
#
# Args:
# 1. The key to read
# 2. The default if the setting or config does not exist
get_ftl_conf_value() {
local key=$1
local default=$2
local value
# Obtain key=... setting from pihole-FTL.conf
if [[ -e "$PIHOLE_FTL_CONF_FILE" ]]; then
# Constructed to return nothing when
# a) the setting is not present in the config file, or
# b) the setting is commented out (e.g. "#DBFILE=...")
value="$(sed -n -e "s/^\\s*$key=\\s*//p" ${PIHOLE_FTL_CONF_FILE})"
fi
# Test for missing value. Use default value in this case.
if [[ -z "$value" ]]; then
value="$default"
fi
echo "$value"
}
PIHOLE_GRAVITY_DB_FILE="$(get_ftl_conf_value "GRAVITYDB" "${PIHOLE_DIRECTORY}/gravity.db")"
PIHOLE_COMMAND="${BIN_DIRECTORY}/pihole" PIHOLE_COMMAND="${BIN_DIRECTORY}/pihole"
PIHOLE_COLTABLE_FILE="${BIN_DIRECTORY}/COL_TABLE" PIHOLE_COLTABLE_FILE="${BIN_DIRECTORY}/COL_TABLE"
@@ -105,7 +133,7 @@ FTL_PORT="${RUN_DIRECTORY}/pihole-FTL.port"
PIHOLE_LOG="${LOG_DIRECTORY}/pihole.log" PIHOLE_LOG="${LOG_DIRECTORY}/pihole.log"
PIHOLE_LOG_GZIPS="${LOG_DIRECTORY}/pihole.log.[0-9].*" PIHOLE_LOG_GZIPS="${LOG_DIRECTORY}/pihole.log.[0-9].*"
PIHOLE_DEBUG_LOG="${LOG_DIRECTORY}/pihole_debug.log" PIHOLE_DEBUG_LOG="${LOG_DIRECTORY}/pihole_debug.log"
PIHOLE_FTL_LOG="${LOG_DIRECTORY}/pihole-FTL.log" PIHOLE_FTL_LOG="$(get_ftl_conf_value "LOGFILE" "${LOG_DIRECTORY}/pihole-FTL.log")"
PIHOLE_WEB_SERVER_ACCESS_LOG_FILE="${WEB_SERVER_LOG_DIRECTORY}/access.log" PIHOLE_WEB_SERVER_ACCESS_LOG_FILE="${WEB_SERVER_LOG_DIRECTORY}/access.log"
PIHOLE_WEB_SERVER_ERROR_LOG_FILE="${WEB_SERVER_LOG_DIRECTORY}/error.log" PIHOLE_WEB_SERVER_ERROR_LOG_FILE="${WEB_SERVER_LOG_DIRECTORY}/error.log"
@@ -634,19 +662,21 @@ ping_internet() {
} }
compare_port_to_service_assigned() { compare_port_to_service_assigned() {
local service_name="${1}" local service_name
# The programs we use may change at some point, so they are in a varible here local expected_service
local resolver="pihole-FTL" local port
local web_server="lighttpd"
local ftl="pihole-FTL" service_name="${2}"
expected_service="${1}"
port="${3}"
# If the service is a Pi-hole service, highlight it in green # If the service is a Pi-hole service, highlight it in green
if [[ "${service_name}" == "${resolver}" ]] || [[ "${service_name}" == "${web_server}" ]] || [[ "${service_name}" == "${ftl}" ]]; then if [[ "${service_name}" == "${expected_service}" ]]; then
log_write "[${COL_GREEN}${port_number}${COL_NC}] is in use by ${COL_GREEN}${service_name}${COL_NC}" log_write "[${COL_GREEN}${port}${COL_NC}] is in use by ${COL_GREEN}${service_name}${COL_NC}"
# Otherwise, # Otherwise,
else else
# Show the service name in red since it's non-standard # Show the service name in red since it's non-standard
log_write "[${COL_RED}${port_number}${COL_NC}] is in use by ${COL_RED}${service_name}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_PORTS})" log_write "[${COL_RED}${port}${COL_NC}] is in use by ${COL_RED}${service_name}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_PORTS})"
fi fi
} }
@@ -680,11 +710,11 @@ check_required_ports() {
fi fi
# Use a case statement to determine if the right services are using the right ports # Use a case statement to determine if the right services are using the right ports
case "$(echo "$port_number" | rev | cut -d: -f1 | rev)" in case "$(echo "$port_number" | rev | cut -d: -f1 | rev)" in
53) compare_port_to_service_assigned "${resolver}" 53) compare_port_to_service_assigned "${resolver}" "${service_name}" 53
;; ;;
80) compare_port_to_service_assigned "${web_server}" 80) compare_port_to_service_assigned "${web_server}" "${service_name}" 80
;; ;;
4711) compare_port_to_service_assigned "${ftl}" 4711) compare_port_to_service_assigned "${ftl}" "${service_name}" 4711
;; ;;
# If it's not a default port that Pi-hole needs, just print it out for the user to see # If it's not a default port that Pi-hole needs, just print it out for the user to see
*) log_write "${port_number} ${service_name} (${protocol_type})"; *) log_write "${port_number} ${service_name} (${protocol_type})";
@@ -1076,20 +1106,20 @@ show_db_entries() {
IFS="$OLD_IFS" IFS="$OLD_IFS"
} }
show_groups() {
show_db_entries "Groups" "SELECT id,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,name,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,description FROM \"group\"" "4 7 50 19 19 50"
}
show_adlists() { show_adlists() {
show_db_entries "Adlists" "SELECT * FROM adlist" "4 100 7 10 13 50" show_db_entries "Adlists" "SELECT id,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,GROUP_CONCAT(adlist_by_group.group_id) group_ids,address,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM adlist LEFT JOIN adlist_by_group ON adlist.id = adlist_by_group.adlist_id GROUP BY id;" "4 7 12 100 19 19 50"
} }
show_whitelist() { show_domainlist() {
show_db_entries "Whitelist" "SELECT * FROM whitelist" "4 100 7 10 13 50" show_db_entries "Domainlist (0/1 = exact white-/blacklist, 2/3 = regex white-/blacklist)" "SELECT id,CASE type WHEN '0' THEN '0 ' WHEN '1' THEN ' 1 ' WHEN '2' THEN ' 2 ' WHEN '3' THEN ' 3' ELSE type END type,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,GROUP_CONCAT(domainlist_by_group.group_id) group_ids,domain,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM domainlist LEFT JOIN domainlist_by_group ON domainlist.id = domainlist_by_group.domainlist_id GROUP BY id;" "4 4 7 12 100 19 19 50"
} }
show_blacklist() { show_clients() {
show_db_entries "Blacklist" "SELECT * FROM blacklist" "4 100 7 10 13 50" show_db_entries "Clients" "SELECT id,GROUP_CONCAT(client_by_group.group_id) group_ids,ip,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM client LEFT JOIN client_by_group ON client.id = client_by_group.client_id GROUP BY id;" "4 12 100 19 19 50"
}
show_regexlist() {
show_db_entries "Regexlist" "SELECT * FROM regex" "4 100 7 10 13 50"
} }
analyze_gravity_list() { analyze_gravity_list() {
@@ -1099,16 +1129,17 @@ analyze_gravity_list() {
gravity_permissions=$(ls -ld "${PIHOLE_GRAVITY_DB_FILE}") gravity_permissions=$(ls -ld "${PIHOLE_GRAVITY_DB_FILE}")
log_write "${COL_GREEN}${gravity_permissions}${COL_NC}" log_write "${COL_GREEN}${gravity_permissions}${COL_NC}"
local gravity_size show_db_entries "Info table" "SELECT property,value FROM info" "20 40"
gravity_size=$(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT COUNT(*) FROM vw_gravity") gravity_updated_raw="$(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT value FROM info where property = 'updated'")"
log_write " Size (excluding blacklist): ${COL_CYAN}${gravity_size}${COL_NC} entries" gravity_updated="$(date -d @"${gravity_updated_raw}")"
log_write " Last gravity run finished at: ${COL_CYAN}${gravity_updated}${COL_NC}"
log_write "" log_write ""
OLD_IFS="$IFS" OLD_IFS="$IFS"
IFS=$'\r\n' IFS=$'\r\n'
local gravity_sample=() local gravity_sample=()
mapfile -t gravity_sample < <(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity LIMIT 10") mapfile -t gravity_sample < <(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity LIMIT 10")
log_write " ${COL_CYAN}----- First 10 Domains -----${COL_NC}" log_write " ${COL_CYAN}----- First 10 Gravity Domains -----${COL_NC}"
for line in "${gravity_sample[@]}"; do for line in "${gravity_sample[@]}"; do
log_write " ${line}" log_write " ${line}"
@@ -1265,10 +1296,10 @@ process_status
parse_setup_vars parse_setup_vars
check_x_headers check_x_headers
analyze_gravity_list analyze_gravity_list
show_groups
show_domainlist
show_clients
show_adlists show_adlists
show_whitelist
show_blacklist
show_regexlist
show_content_of_pihole_files show_content_of_pihole_files
parse_locale parse_locale
analyze_pihole_log analyze_pihole_log

View File

@@ -13,7 +13,6 @@
piholeDir="/etc/pihole" piholeDir="/etc/pihole"
gravityDBfile="${piholeDir}/gravity.db" gravityDBfile="${piholeDir}/gravity.db"
options="$*" options="$*"
adlist=""
all="" all=""
exact="" exact=""
blockpage="" blockpage=""
@@ -34,16 +33,18 @@ scanList(){
export LC_CTYPE=C export LC_CTYPE=C
# /dev/null forces filename to be printed when only one list has been generated # /dev/null forces filename to be printed when only one list has been generated
# shellcheck disable=SC2086
case "${type}" in case "${type}" in
"exact" ) grep -i -E -l "(^|(?<!#)\\s)${esc_domain}($|\\s|#)" ${lists} /dev/null 2>/dev/null;; "exact" ) grep -i -E -l "(^|(?<!#)\\s)${esc_domain}($|\\s|#)" ${lists} /dev/null 2>/dev/null;;
# Create array of regexps # Iterate through each regexp and check whether it matches the domainQuery
# Iterate through each regexp and check whether it matches the domainQuery # If it does, print the matching regexp and continue looping
# If it does, print the matching regexp and continue looping # Input 1 - regexps | Input 2 - domainQuery
# Input 1 - regexps | Input 2 - domainQuery "regex" )
"regex" ) awk 'NR==FNR{regexps[$0];next}{for (r in regexps)if($0 ~ r)print r}' \ for list in ${lists}; do
<(echo "${lists}") <(echo "${domain}") 2>/dev/null;; if [[ "${domain}" =~ ${list} ]]; then
* ) grep -i "${esc_domain}" ${lists} /dev/null 2>/dev/null;; printf "%b\n" "${list}";
fi
done;;
* ) grep -i "${esc_domain}" ${lists} /dev/null 2>/dev/null;;
esac esac
} }
@@ -53,7 +54,6 @@ Example: 'pihole -q -exact domain.com'
Query the adlists for a specified domain Query the adlists for a specified domain
Options: Options:
-adlist Print the name of the block list URL
-exact Search the block lists for exact domain matches -exact Search the block lists for exact domain matches
-all Return all query matches within a block list -all Return all query matches within a block list
-h, --help Show this help dialog" -h, --help Show this help dialog"
@@ -64,7 +64,6 @@ fi
if [[ "${options}" == *"-bp"* ]]; then if [[ "${options}" == *"-bp"* ]]; then
exact="exact"; blockpage=true exact="exact"; blockpage=true
else else
[[ "${options}" == *"-adlist"* ]] && adlist=true
[[ "${options}" == *"-all"* ]] && all=true [[ "${options}" == *"-all"* ]] && all=true
if [[ "${options}" == *"-exact"* ]]; then if [[ "${options}" == *"-exact"* ]]; then
exact="exact"; matchType="exact ${matchType}" exact="exact"; matchType="exact ${matchType}"
@@ -90,7 +89,7 @@ if [[ -n "${str:-}" ]]; then
fi fi
scanDatabaseTable() { scanDatabaseTable() {
local domain table type querystr result local domain table type querystr result extra
domain="$(printf "%q" "${1}")" domain="$(printf "%q" "${1}")"
table="${2}" table="${2}"
type="${3:-}" type="${3:-}"
@@ -99,10 +98,17 @@ scanDatabaseTable() {
# Underscores are SQLite wildcards matching exactly one character. We obviously want to suppress this # Underscores are SQLite wildcards matching exactly one character. We obviously want to suppress this
# behavior. The "ESCAPE '\'" clause specifies that an underscore preceded by an '\' should be matched # behavior. The "ESCAPE '\'" clause specifies that an underscore preceded by an '\' should be matched
# as a literal underscore character. We pretreat the $domain variable accordingly to escape underscores. # as a literal underscore character. We pretreat the $domain variable accordingly to escape underscores.
case "${type}" in if [[ "${table}" == "gravity" ]]; then
"exact" ) querystr="SELECT domain FROM vw_${table} WHERE domain = '${domain}'";; case "${exact}" in
* ) querystr="SELECT domain FROM vw_${table} WHERE domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";; "exact" ) querystr="SELECT gravity.domain,adlist.address,adlist.enabled FROM gravity LEFT JOIN adlist ON adlist.id = gravity.adlist_id WHERE domain = '${domain}'";;
esac * ) querystr="SELECT gravity.domain,adlist.address,adlist.enabled FROM gravity LEFT JOIN adlist ON adlist.id = gravity.adlist_id WHERE domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";;
esac
else
case "${exact}" in
"exact" ) querystr="SELECT domain,enabled FROM domainlist WHERE type = '${type}' AND domain = '${domain}'";;
* ) querystr="SELECT domain,enabled FROM domainlist WHERE type = '${type}' AND domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";;
esac
fi
# Send prepared query to gravity database # Send prepared query to gravity database
result="$(sqlite3 "${gravityDBfile}" "${querystr}")" 2> /dev/null result="$(sqlite3 "${gravityDBfile}" "${querystr}")" 2> /dev/null
@@ -111,11 +117,18 @@ scanDatabaseTable() {
return return
fi fi
if [[ "${table}" == "gravity" ]]; then
echo "${result}"
return
fi
# Mark domain as having been white-/blacklist matched (global variable) # Mark domain as having been white-/blacklist matched (global variable)
wbMatch=true wbMatch=true
# Print table name # Print table name
echo " ${matchType^} found in ${COL_BOLD}${table^}${COL_NC}" if [[ -z "${blockpage}" ]]; then
echo " ${matchType^} found in ${COL_BOLD}exact ${table}${COL_NC}"
fi
# Loop over results and print them # Loop over results and print them
mapfile -t results <<< "${result}" mapfile -t results <<< "${result}"
@@ -124,52 +137,66 @@ scanDatabaseTable() {
echo "π ${result}" echo "π ${result}"
exit 0 exit 0
fi fi
echo " ${result}" domain="${result/|*}"
if [[ "${result#*|}" == "0" ]]; then
extra=" (disabled)"
else
extra=""
fi
echo " ${domain}${extra}"
done done
} }
scanRegexDatabaseTable() {
local domain list
domain="${1}"
list="${2}"
type="${3:-}"
# Query all regex from the corresponding database tables
mapfile -t regexList < <(sqlite3 "${gravityDBfile}" "SELECT domain FROM domainlist WHERE type = ${type}" 2> /dev/null)
# If we have regexps to process
if [[ "${#regexList[@]}" -ne 0 ]]; then
# Split regexps over a new line
str_regexList=$(printf '%s\n' "${regexList[@]}")
# Check domain against regexps
mapfile -t regexMatches < <(scanList "${domain}" "${str_regexList}" "regex")
# If there were regex matches
if [[ "${#regexMatches[@]}" -ne 0 ]]; then
# Split matching regexps over a new line
str_regexMatches=$(printf '%s\n' "${regexMatches[@]}")
# Form a "matched" message
str_message="${matchType^} found in ${COL_BOLD}regex ${list}${COL_NC}"
# Form a "results" message
str_result="${COL_BOLD}${str_regexMatches}${COL_NC}"
# If we are displaying more than just the source of the block
if [[ -z "${blockpage}" ]]; then
# Set the wildcard match flag
wcMatch=true
# Echo the "matched" message, indented by one space
echo " ${str_message}"
# Echo the "results" message, each line indented by three spaces
# shellcheck disable=SC2001
echo "${str_result}" | sed 's/^/ /'
else
echo "π .wildcard"
exit 0
fi
fi
fi
}
# Scan Whitelist and Blacklist # Scan Whitelist and Blacklist
scanDatabaseTable "${domainQuery}" "whitelist" "${exact}" scanDatabaseTable "${domainQuery}" "whitelist" "0"
scanDatabaseTable "${domainQuery}" "blacklist" "${exact}" scanDatabaseTable "${domainQuery}" "blacklist" "1"
# Scan Regex table # Scan Regex table
mapfile -t regexList < <(sqlite3 "${gravityDBfile}" "SELECT domain FROM vw_regex" 2> /dev/null) scanRegexDatabaseTable "${domainQuery}" "whitelist" "2"
scanRegexDatabaseTable "${domainQuery}" "blacklist" "3"
# If we have regexps to process # Query block lists
if [[ "${#regexList[@]}" -ne 0 ]]; then mapfile -t results <<< "$(scanDatabaseTable "${domainQuery}" "gravity")"
# Split regexps over a new line
str_regexList=$(printf '%s\n' "${regexList[@]}")
# Check domainQuery against regexps
mapfile -t regexMatches < <(scanList "${domainQuery}" "${str_regexList}" "regex")
# If there were regex matches
if [[ "${#regexMatches[@]}" -ne 0 ]]; then
# Split matching regexps over a new line
str_regexMatches=$(printf '%s\n' "${regexMatches[@]}")
# Form a "matched" message
str_message="${matchType^} found in ${COL_BOLD}Regex list${COL_NC}"
# Form a "results" message
str_result="${COL_BOLD}${str_regexMatches}${COL_NC}"
# If we are displaying more than just the source of the block
if [[ -z "${blockpage}" ]]; then
# Set the wildcard match flag
wcMatch=true
# Echo the "matched" message, indented by one space
echo " ${str_message}"
# Echo the "results" message, each line indented by three spaces
# shellcheck disable=SC2001
echo "${str_result}" | sed 's/^/ /'
else
echo "π Regex list"
exit 0
fi
fi
fi
# Get version sorted *.domains filenames (without dir path)
lists=("$(cd "$piholeDir" || exit 0; printf "%s\\n" -- *.domains | sort -V)")
# Query blocklists for occurences of domain
mapfile -t results <<< "$(scanList "${domainQuery}" "${lists[*]}" "${exact}")"
# Handle notices # Handle notices
if [[ -z "${wbMatch:-}" ]] && [[ -z "${wcMatch:-}" ]] && [[ -z "${results[*]}" ]]; then if [[ -z "${wbMatch:-}" ]] && [[ -z "${wcMatch:-}" ]] && [[ -z "${results[*]}" ]]; then
@@ -184,26 +211,6 @@ elif [[ -z "${all}" ]] && [[ "${#results[*]}" -ge 100 ]]; then
exit 0 exit 0
fi fi
# Remove unwanted content from non-exact $results
if [[ -z "${exact}" ]]; then
# Delete lines starting with #
# Remove comments after domain
# Remove hosts format IP address
mapfile -t results <<< "$(IFS=$'\n'; sed \
-e "/:#/d" \
-e "s/[ \\t]#.*//g" \
-e "s/:.*[ \\t]/:/g" \
<<< "${results[*]}")"
# Exit if result was in a comment
[[ -z "${results[*]}" ]] && exit 0
fi
# Get adlist file content as array
if [[ -n "${adlist}" ]] || [[ -n "${blockpage}" ]]; then
# Retrieve source URLs from gravity database
mapfile -t adlists <<< "$(sqlite3 "${gravityDBfile}" "SELECT address FROM vw_adlist;" 2> /dev/null)"
fi
# Print "Exact matches for" title # Print "Exact matches for" title
if [[ -n "${exact}" ]] && [[ -z "${blockpage}" ]]; then if [[ -n "${exact}" ]] && [[ -z "${blockpage}" ]]; then
plural=""; [[ "${#results[*]}" -gt 1 ]] && plural="es" plural=""; [[ "${#results[*]}" -gt 1 ]] && plural="es"
@@ -211,28 +218,25 @@ if [[ -n "${exact}" ]] && [[ -z "${blockpage}" ]]; then
fi fi
for result in "${results[@]}"; do for result in "${results[@]}"; do
fileName="${result/:*/}" match="${result/|*/}"
extra="${result#*|}"
# Determine *.domains URL using filename's number adlistAddress="${extra/|*/}"
if [[ -n "${adlist}" ]] || [[ -n "${blockpage}" ]]; then extra="${extra#*|}"
fileNum="${fileName/list./}"; fileNum="${fileNum%%.*}" if [[ "${extra}" == "0" ]]; then
fileName="${adlists[$fileNum]}" extra="(disabled)"
else
# Discrepency occurs when adlists has been modified, but Gravity has not been run extra=""
if [[ -z "${fileName}" ]]; then
fileName="${COL_LIGHT_RED}(no associated adlists URL found)${COL_NC}"
fi
fi fi
if [[ -n "${blockpage}" ]]; then if [[ -n "${blockpage}" ]]; then
echo "${fileNum} ${fileName}" echo "0 ${adlistAddress}"
elif [[ -n "${exact}" ]]; then elif [[ -n "${exact}" ]]; then
echo " ${fileName}" echo " - ${adlistAddress} ${extra}"
else else
if [[ ! "${fileName}" == "${fileName_prev:-}" ]]; then if [[ ! "${adlistAddress}" == "${adlistAddress_prev:-}" ]]; then
count="" count=""
echo " ${matchType^} found in ${COL_BOLD}${fileName}${COL_NC}:" echo " ${matchType^} found in ${COL_BOLD}${adlistAddress}${COL_NC}:"
fileName_prev="${fileName}" adlistAddress_prev="${adlistAddress}"
fi fi
: $((count++)) : $((count++))
@@ -242,7 +246,7 @@ for result in "${results[@]}"; do
[[ "${count}" -gt "${max_count}" ]] && continue [[ "${count}" -gt "${max_count}" ]] && continue
echo " ${COL_GRAY}Over ${count} results found, skipping rest of file${COL_NC}" echo " ${COL_GRAY}Over ${count} results found, skipping rest of file${COL_NC}"
else else
echo " ${result#*:}" echo " ${match} ${extra}"
fi fi
fi fi
done done
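The rewritten loop above depends on scanDatabaseTable emitting one pipe-delimited record per hit (match, adlist address, enabled flag) and on bash parameter expansion to split it. A minimal standalone sketch of that parsing; the record layout is inferred from the expansions above and the sample values are invented for illustration:

#!/usr/bin/env bash
# Hypothetical record in the assumed "match|adlist address|enabled" layout
result="doubleclick.net|https://example.com/hosts.txt|0"

match="${result/|*/}"          # strip everything from the first "|" -> matched domain
extra="${result#*|}"           # drop the domain -> "address|flag"
adlistAddress="${extra/|*/}"   # first remaining field -> adlist address
extra="${extra#*|}"            # last field -> enabled flag
if [[ "${extra}" == "0" ]]; then extra="(disabled)"; else extra=""; fi

echo "${match} from ${adlistAddress} ${extra}"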


@@ -31,7 +31,6 @@ source "/opt/pihole/COL_TABLE"
# make_repo() sourced from basic-install.sh
# update_repo() source from basic-install.sh
# getGitFiles() sourced from basic-install.sh
-# get_binary_name() sourced from basic-install.sh
# FTLcheckUpdate() sourced from basic-install.sh
GitCheckUpdateAvail() {
@@ -129,7 +128,12 @@ main() {
    fi
fi
-if FTLcheckUpdate > /dev/null; then
+local funcOutput
+funcOutput=$(get_binary_name) #Store output of get_binary_name here
+local binary
+binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)
+if FTLcheckUpdate "${binary}" > /dev/null; then
    FTL_update=true
    echo -e " ${INFO} FTL:\\t\\t${COL_YELLOW}update available${COL_NC}"
else
@@ -194,6 +198,14 @@ main() {
    ${PI_HOLE_FILES_DIR}/automated\ install/basic-install.sh --reconfigure --unattended || \
    echo -e "${basicError}" && exit 1
fi
+if [[ "${FTL_update}" == true || "${core_update}" == true || "${web_update}" == true ]]; then
+    # Force an update of the updatechecker
+    /opt/pihole/updatecheck.sh
+    /opt/pihole/updatecheck.sh x remote
+    echo -e " ${INFO} Local version file information updated."
+fi
echo ""
exit 0
}


@@ -84,6 +84,21 @@ getRemoteVersion(){
    # Get the version from the remote origin
    local daemon="${1}"
    local version
local cachedVersions
local arrCache
cachedVersions="/etc/pihole/GitHubVersions"
#If the above file exists, then we can read from that. Prevents overuse of Github API
if [[ -f "$cachedVersions" ]]; then
IFS=' ' read -r -a arrCache < "$cachedVersions"
case $daemon in
"pi-hole" ) echo "${arrCache[0]}";;
"AdminLTE" ) echo "${arrCache[1]}";;
"FTL" ) echo "${arrCache[2]}";;
esac
return 0
fi
    version=$(curl --silent --fail "https://api.github.com/repos/pi-hole/${daemon}/releases/latest" | \
        awk -F: '$1 ~/tag_name/ { print $2 }' | \
@@ -97,22 +112,48 @@ getRemoteVersion(){
    return 0
}
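The cached-versions shortcut added above reads /etc/pihole/GitHubVersions into an array with `read -r -a` so that `pihole -v` does not hit the GitHub API on every call. A small standalone sketch; the one-line, three-field layout (core, web, FTL) is inferred from the indexing above, and the cache is presumably written by updatecheck.sh:

#!/usr/bin/env bash
cachedVersions="/etc/pihole/GitHubVersions"   # assumed to hold "v5.0 v5.0 v5.0"-style fields

if [[ -f "${cachedVersions}" ]]; then
    # Split the single line on spaces into an indexed array
    IFS=' ' read -r -a arrCache < "${cachedVersions}"
    echo "Core: ${arrCache[0]}  Web: ${arrCache[1]}  FTL: ${arrCache[2]}"
else
    echo "No cache yet - fall back to the GitHub API"
fi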
getLocalBranch(){
# Get the checked out branch of the local directory
local directory="${1}"
local branch
# Local FTL btranch is stored in /etc/pihole/ftlbranch
if [[ "$1" == "FTL" ]]; then
branch="$(pihole-FTL branch)"
else
cd "${directory}" 2> /dev/null || { echo "${DEFAULT}"; return 1; }
branch=$(git rev-parse --abbrev-ref HEAD || echo "$DEFAULT")
fi
if [[ ! "${branch}" =~ ^v ]]; then
if [[ "${branch}" == "master" ]]; then
echo ""
elif [[ "${branch}" == "HEAD" ]]; then
echo "in detached HEAD state at "
else
echo "${branch} "
fi
else
# Branch started in "v"
echo "release "
fi
return 0
}
versionOutput() {
    [[ "$1" == "pi-hole" ]] && GITDIR=$COREGITDIR
    [[ "$1" == "AdminLTE" ]] && GITDIR=$WEBGITDIR
    [[ "$1" == "FTL" ]] && GITDIR="FTL"
-    [[ "$2" == "-c" ]] || [[ "$2" == "--current" ]] || [[ -z "$2" ]] && current=$(getLocalVersion $GITDIR)
+    [[ "$2" == "-c" ]] || [[ "$2" == "--current" ]] || [[ -z "$2" ]] && current=$(getLocalVersion $GITDIR) && branch=$(getLocalBranch $GITDIR)
    [[ "$2" == "-l" ]] || [[ "$2" == "--latest" ]] || [[ -z "$2" ]] && latest=$(getRemoteVersion "$1")
    if [[ "$2" == "-h" ]] || [[ "$2" == "--hash" ]]; then
-        [[ "$3" == "-c" ]] || [[ "$3" == "--current" ]] || [[ -z "$3" ]] && curHash=$(getLocalHash "$GITDIR")
+        [[ "$3" == "-c" ]] || [[ "$3" == "--current" ]] || [[ -z "$3" ]] && curHash=$(getLocalHash "$GITDIR") && branch=$(getLocalBranch $GITDIR)
        [[ "$3" == "-l" ]] || [[ "$3" == "--latest" ]] || [[ -z "$3" ]] && latHash=$(getRemoteHash "$1" "$(cd "$GITDIR" 2> /dev/null && git rev-parse --abbrev-ref HEAD)")
    fi
    if [[ -n "$current" ]] && [[ -n "$latest" ]]; then
-        output="${1^} version is $current (Latest: $latest)"
+        output="${1^} version is $branch$current (Latest: $latest)"
    elif [[ -n "$current" ]] && [[ -z "$latest" ]]; then
-        output="Current ${1^} version is $current"
+        output="Current ${1^} version is $branch$current."
    elif [[ -z "$current" ]] && [[ -n "$latest" ]]; then
        output="Latest ${1^} version is $latest"
    elif [[ "$curHash" == "N/A" ]] || [[ "$latHash" == "N/A" ]]; then


@@ -16,6 +16,8 @@ readonly dhcpconfig="/etc/dnsmasq.d/02-pihole-dhcp.conf"
readonly FTLconf="/etc/pihole/pihole-FTL.conf"
# 03 -> wildcards
readonly dhcpstaticconfig="/etc/dnsmasq.d/04-pihole-static-dhcp.conf"
+readonly PI_HOLE_BIN_DIR="/usr/local/bin"
+readonly dnscustomfile="/etc/pihole/custom.list"
readonly gravityDBfile="/etc/pihole/gravity.db"
@@ -34,7 +36,6 @@ Options:
    -c, celsius         Set Celsius as preferred temperature unit
    -f, fahrenheit      Set Fahrenheit as preferred temperature unit
    -k, kelvin          Set Kelvin as preferred temperature unit
-    -r, hostrecord      Add a name to the DNS associated to an IPv4/IPv6 address
    -e, email           Set an administrative contact address for the Block Page
    -h, --help          Show this help dialog
    -i, interface       Specify dnsmasq's interface listening behavior
@@ -87,9 +88,9 @@ SetTemperatureUnit() {
HashPassword() {
    # Compute password hash twice to avoid rainbow table vulnerability
-    return=$(echo -n ${1} | sha256sum | sed 's/\s.*$//')
-    return=$(echo -n ${return} | sha256sum | sed 's/\s.*$//')
-    echo ${return}
+    return=$(echo -n "${1}" | sha256sum | sed 's/\s.*$//')
+    return=$(echo -n "${return}" | sha256sum | sed 's/\s.*$//')
+    echo "${return}"
}
SetWebPassword() {
@@ -143,18 +144,18 @@ ProcessDNSSettings() {
    delete_dnsmasq_setting "server"
    COUNTER=1
-    while [[ 1 ]]; do
+    while true ; do
        var=PIHOLE_DNS_${COUNTER}
        if [ -z "${!var}" ]; then
            break;
        fi
        add_dnsmasq_setting "server" "${!var}"
-        let COUNTER=COUNTER+1
+        (( COUNTER++ ))
    done
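The loop above walks PIHOLE_DNS_1, PIHOLE_DNS_2, ... using bash indirect expansion, `${!var}`. A minimal sketch of the same pattern; the two server values are stand-ins for what setupVars.conf would normally provide:

#!/usr/bin/env bash
# Stand-in values; in Pi-hole these come from setupVars.conf
PIHOLE_DNS_1="8.8.8.8"
PIHOLE_DNS_2="8.8.4.4"

COUNTER=1
while true; do
    var="PIHOLE_DNS_${COUNTER}"
    # ${!var} expands the variable whose *name* is stored in var
    [[ -z "${!var}" ]] && break
    echo "upstream #${COUNTER}: ${!var}"
    (( COUNTER++ ))
done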
    # The option LOCAL_DNS_PORT is deprecated
    # We apply it once more, and then convert it into the current format
-    if [ ! -z "${LOCAL_DNS_PORT}" ]; then
+    if [ -n "${LOCAL_DNS_PORT}" ]; then
        add_dnsmasq_setting "server" "127.0.0.1#${LOCAL_DNS_PORT}"
        add_setting "PIHOLE_DNS_${COUNTER}" "127.0.0.1#${LOCAL_DNS_PORT}"
        delete_setting "LOCAL_DNS_PORT"
@@ -177,14 +178,13 @@ ProcessDNSSettings() {
if [[ "${DNSSEC}" == true ]]; then if [[ "${DNSSEC}" == true ]]; then
echo "dnssec echo "dnssec
trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
" >> "${dnsmasqconfig}" " >> "${dnsmasqconfig}"
fi fi
delete_dnsmasq_setting "host-record" delete_dnsmasq_setting "host-record"
if [ ! -z "${HOSTRECORD}" ]; then if [ -n "${HOSTRECORD}" ]; then
add_dnsmasq_setting "host-record" "${HOSTRECORD}" add_dnsmasq_setting "host-record" "${HOSTRECORD}"
fi fi
@@ -212,6 +212,11 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC68345710423
add_dnsmasq_setting "server=/${CONDITIONAL_FORWARDING_DOMAIN}/${CONDITIONAL_FORWARDING_IP}" add_dnsmasq_setting "server=/${CONDITIONAL_FORWARDING_DOMAIN}/${CONDITIONAL_FORWARDING_IP}"
add_dnsmasq_setting "server=/${CONDITIONAL_FORWARDING_REVERSE}/${CONDITIONAL_FORWARDING_IP}" add_dnsmasq_setting "server=/${CONDITIONAL_FORWARDING_REVERSE}/${CONDITIONAL_FORWARDING_IP}"
fi fi
# Prevent Firefox from automatically switching over to DNS-over-HTTPS
# This follows https://support.mozilla.org/en-US/kb/configuring-networks-disable-dns-over-https
# (sourced 7th September 2019)
add_dnsmasq_setting "server=/use-application-dns.net/"
} }
SetDNSServers() { SetDNSServers() {
@@ -276,7 +281,7 @@ Reboot() {
}
RestartDNS() {
-    /usr/local/bin/pihole restartdns
+    "${PI_HOLE_BIN_DIR}"/pihole restartdns
}
SetQueryLogOptions() {
@@ -395,20 +400,38 @@ SetWebUILayout() {
    change_setting "WEBUIBOXEDLAYOUT" "${args[2]}"
}
CheckUrl(){
local regex
# Check for characters NOT allowed in URLs
regex="[^a-zA-Z0-9:/?&%=~._-]"
if [[ "${1}" =~ ${regex} ]]; then
return 1
else
return 0
fi
}
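CheckUrl rejects any character outside a conservative allow-list before the address is spliced into an SQL statement. A usage sketch that reuses the same character class; the sample URLs are made up:

#!/usr/bin/env bash
CheckUrl(){
    local regex="[^a-zA-Z0-9:/?&%=~._-]"   # same allow-list as above: anything else fails
    [[ "${1}" =~ ${regex} ]] && return 1 || return 0
}

for url in "https://example.com/list.txt" "https://evil.example/'; DROP TABLE adlist;--"; do
    if CheckUrl "${url}"; then
        echo "accepted: ${url}"
    else
        echo "rejected: ${url}"
    fi
done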
CustomizeAdLists() {
    local address
    address="${args[3]}"
+    local comment
+    comment="${args[4]}"
-    if [[ "${args[2]}" == "enable" ]]; then
-        sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 1 WHERE address = '${address}'"
-    elif [[ "${args[2]}" == "disable" ]]; then
-        sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 0 WHERE address = '${address}'"
-    elif [[ "${args[2]}" == "add" ]]; then
-        sqlite3 "${gravityDBfile}" "INSERT OR IGNORE INTO adlist (address) VALUES ('${address}')"
-    elif [[ "${args[2]}" == "del" ]]; then
-        sqlite3 "${gravityDBfile}" "DELETE FROM adlist WHERE address = '${address}'"
-    else
-        echo "Not permitted"
-        return 1
-    fi
+    if CheckUrl "${address}"; then
+        if [[ "${args[2]}" == "enable" ]]; then
+            sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 1 WHERE address = '${address}'"
+        elif [[ "${args[2]}" == "disable" ]]; then
+            sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 0 WHERE address = '${address}'"
+        elif [[ "${args[2]}" == "add" ]]; then
+            sqlite3 "${gravityDBfile}" "INSERT OR IGNORE INTO adlist (address, comment) VALUES ('${address}', '${comment}')"
+        elif [[ "${args[2]}" == "del" ]]; then
+            sqlite3 "${gravityDBfile}" "DELETE FROM adlist WHERE address = '${address}'"
+        else
+            echo "Not permitted"
+            return 1
+        fi
+    else
+        echo "Invalid Url"
+        return 1
+    fi
}
@@ -454,32 +477,6 @@ RemoveDHCPStaticAddress() {
sed -i "/dhcp-host=${mac}.*/d" "${dhcpstaticconfig}" sed -i "/dhcp-host=${mac}.*/d" "${dhcpstaticconfig}"
} }
SetHostRecord() {
if [[ "${1}" == "-h" ]] || [[ "${1}" == "--help" ]]; then
echo "Usage: pihole -a hostrecord <domain> [IPv4-address],[IPv6-address]
Example: 'pihole -a hostrecord home.domain.com 192.168.1.1,2001:db8:a0b:12f0::1'
Add a name to the DNS associated to an IPv4/IPv6 address
Options:
\"\" Empty: Remove host record
-h, --help Show this help dialog"
exit 0
fi
if [[ -n "${args[3]}" ]]; then
change_setting "HOSTRECORD" "${args[2]},${args[3]}"
echo -e " ${TICK} Setting host record for ${args[2]} to ${args[3]}"
else
change_setting "HOSTRECORD" ""
echo -e " ${TICK} Removing host record"
fi
ProcessDNSSettings
# Restart dnsmasq to load new configuration
RestartDNS
}
SetAdminEmail() {
    if [[ "${1}" == "-h" ]] || [[ "${1}" == "--help" ]]; then
        echo "Usage: pihole -a email <address>
@@ -493,6 +490,16 @@ Options:
    fi
    if [[ -n "${args[2]}" ]]; then
# Sanitize email address in case of security issues
# Regex from https://stackoverflow.com/a/2138832/4065967
local regex
regex="^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\$"
if [[ ! "${args[2]}" =~ ${regex} ]]; then
echo -e " ${CROSS} Invalid email address"
exit 0
fi
change_setting "ADMIN_EMAIL" "${args[2]}" change_setting "ADMIN_EMAIL" "${args[2]}"
echo -e " ${TICK} Setting admin contact to ${args[2]}" echo -e " ${TICK} Setting admin contact to ${args[2]}"
else else
@@ -518,10 +525,10 @@ Interfaces:
    fi
    if [[ "${args[2]}" == "all" ]]; then
-        echo -e " ${INFO} Listening on all interfaces, permiting all origins. Please use a firewall!"
+        echo -e " ${INFO} Listening on all interfaces, permitting all origins. Please use a firewall!"
        change_setting "DNSMASQ_LISTENING" "all"
    elif [[ "${args[2]}" == "local" ]]; then
-        echo -e " ${INFO} Listening on all interfaces, permiting origins from one hop away (LAN)"
+        echo -e " ${INFO} Listening on all interfaces, permitting origins from one hop away (LAN)"
        change_setting "DNSMASQ_LISTENING" "local"
    else
        echo -e " ${INFO} Listening only on interface ${PIHOLE_INTERFACE}"
@@ -538,25 +545,50 @@ Interfaces:
}
Teleporter() {
-    local datetimestamp=$(date "+%Y-%m-%d_%H-%M-%S")
+    local datetimestamp
+    datetimestamp=$(date "+%Y-%m-%d_%H-%M-%S")
    php /var/www/html/admin/scripts/pi-hole/php/teleporter.php > "pi-hole-teleporter_${datetimestamp}.tar.gz"
}
checkDomain()
{
local domain validDomain
# Convert to lowercase
domain="${1,,}"
validDomain=$(grep -P "^((-|_)*[a-z\\d]((-|_)*[a-z\\d])*(-|_)*)(\\.(-|_)*([a-z\\d]((-|_)*[a-z\\d])*))*$" <<< "${domain}") # Valid chars check
validDomain=$(grep -P "^[^\\.]{1,63}(\\.[^\\.]{1,63})*$" <<< "${validDomain}") # Length of each label
echo "${validDomain}"
}
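Callers treat empty output from checkDomain as "invalid", since it only echoes the lowercased domain when both greps pass. A small standalone sketch; the function body is copied from the diff above and the candidate values are invented:

#!/usr/bin/env bash
checkDomain() {
    local domain validDomain
    domain="${1,,}"   # convert to lowercase first
    validDomain=$(grep -P "^((-|_)*[a-z\\d]((-|_)*[a-z\\d])*(-|_)*)(\\.(-|_)*([a-z\\d]((-|_)*[a-z\\d])*))*$" <<< "${domain}")
    validDomain=$(grep -P "^[^\\.]{1,63}(\\.[^\\.]{1,63})*$" <<< "${validDomain}")
    echo "${validDomain}"
}

for candidate in "Ads.Example.COM" "not a domain!"; do
    valid="$(checkDomain "${candidate}")"
    if [[ -n "${valid}" ]]; then
        echo "ok:      ${valid}"
    else
        echo "invalid: ${candidate}"
    fi
done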
addAudit()
{
    shift # skip "-a"
    shift # skip "audit"
-    for var in "$@"
+    local domains validDomain
+    domains=""
+    for domain in "$@"
    do
-        echo "${var}" >> /etc/pihole/auditlog.list
+        # Check domain to be added. Only continue if it is valid
+        validDomain="$(checkDomain "${domain}")"
+        if [[ -n "${validDomain}" ]]; then
# Put comma in between domains when there is
# more than one domains to be added
# SQL INSERT allows adding multiple rows at once using the format
## INSERT INTO table (domain) VALUES ('abc.de'),('fgh.ij'),('klm.no'),('pqr.st');
if [[ -n "${domains}" ]]; then
domains="${domains},"
fi
domains="${domains}('${domain}')"
fi
    done
-    chmod 644 /etc/pihole/auditlog.list
+    # Insert only the domain here. The date_added field will be
+    # filled with its default value (date_added = current timestamp)
+    sqlite3 "${gravityDBfile}" "INSERT INTO domain_audit (domain) VALUES ${domains};"
} }
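The comment in addAudit describes SQLite's multi-row VALUES syntax: the function builds a string like ('a'),('b') and hands it to a single INSERT. A sketch of the same idea against a throwaway database (the temp file stands in for gravity.db; the table definition matches the domain_audit schema shown later on this page):

#!/usr/bin/env bash
db="$(mktemp)"   # throwaway database, not /etc/pihole/gravity.db
sqlite3 "${db}" "CREATE TABLE domain_audit (id INTEGER PRIMARY KEY AUTOINCREMENT, domain TEXT UNIQUE NOT NULL, date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)));"

domains=""
for domain in example.com ads.example.net; do
    # Comma-separate the tuples so one statement inserts every row
    [[ -n "${domains}" ]] && domains="${domains},"
    domains="${domains}('${domain}')"
done

# Expands to: INSERT INTO domain_audit (domain) VALUES ('example.com'),('ads.example.net');
sqlite3 "${db}" "INSERT INTO domain_audit (domain) VALUES ${domains};"
sqlite3 "${db}" "SELECT id, domain FROM domain_audit;"
rm -f "${db}"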
clearAudit()
{
-    echo -n "" > /etc/pihole/auditlog.list
-    chmod 644 /etc/pihole/auditlog.list
+    sqlite3 "${gravityDBfile}" "DELETE FROM domain_audit;"
} }
SetPrivacyLevel() {
@@ -566,6 +598,28 @@ SetPrivacyLevel() {
    fi
}
AddCustomDNSAddress() {
echo -e " ${TICK} Adding custom DNS entry..."
ip="${args[2]}"
host="${args[3]}"
echo "${ip} ${host}" >> "${dnscustomfile}"
# Restart dnsmasq to load new custom DNS entries
RestartDNS
}
RemoveCustomDNSAddress() {
echo -e " ${TICK} Removing custom DNS entry..."
ip="${args[2]}"
host="${args[3]}"
sed -i "/${ip} ${host}/d" "${dnscustomfile}"
# Restart dnsmasq to update removed custom DNS entries
RestartDNS
}
main() {
    args=("$@")
@@ -589,7 +643,6 @@ main() {
        "resolve" ) ResolutionSettings;;
        "addstaticdhcp" ) AddDHCPStaticAddress;;
        "removestaticdhcp" ) RemoveDHCPStaticAddress;;
-        "-r" | "hostrecord" ) SetHostRecord "$3";;
        "-e" | "email" ) SetAdminEmail "$3";;
        "-i" | "interface" ) SetListeningMode "$@";;
        "-t" | "teleporter" ) Teleporter;;
@@ -597,6 +650,8 @@ main() {
        "audit" ) addAudit "$@";;
        "clearaudit" ) clearAudit;;
        "-l" | "privacylevel" ) SetPrivacyLevel;;
+        "addcustomdns" ) AddCustomDNSAddress;;
+        "removecustomdns" ) RemoveCustomDNSAddress;;
        * ) helpFunc;;
    esac


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
# Pi-hole: A black hole for Internet advertisements
# (c) 2017 Pi-hole, LLC (https://pi-hole.net)
# Network-wide ad blocking via your own hardware.


@@ -1,16 +1,21 @@
-PRAGMA FOREIGN_KEYS=ON;
+PRAGMA foreign_keys=OFF;
+BEGIN TRANSACTION;
CREATE TABLE "group"
(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    enabled BOOLEAN NOT NULL DEFAULT 1,
-    name TEXT NOT NULL,
+    name TEXT UNIQUE NOT NULL,
+    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
+    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
    description TEXT
);
+INSERT INTO "group" (id,enabled,name,description) VALUES (0,1,'Default','The default group');
-CREATE TABLE whitelist
+CREATE TABLE domainlist
(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    type INTEGER NOT NULL DEFAULT 0,
    domain TEXT UNIQUE NOT NULL,
    enabled BOOLEAN NOT NULL DEFAULT 1,
    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
@@ -18,47 +23,6 @@ CREATE TABLE whitelist
    comment TEXT
);
CREATE TABLE whitelist_by_group
(
whitelist_id INTEGER NOT NULL REFERENCES whitelist (id),
group_id INTEGER NOT NULL REFERENCES "group" (id),
PRIMARY KEY (whitelist_id, group_id)
);
CREATE TABLE blacklist
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
domain TEXT UNIQUE NOT NULL,
enabled BOOLEAN NOT NULL DEFAULT 1,
date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
comment TEXT
);
CREATE TABLE blacklist_by_group
(
blacklist_id INTEGER NOT NULL REFERENCES blacklist (id),
group_id INTEGER NOT NULL REFERENCES "group" (id),
PRIMARY KEY (blacklist_id, group_id)
);
CREATE TABLE regex
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
domain TEXT UNIQUE NOT NULL,
enabled BOOLEAN NOT NULL DEFAULT 1,
date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
comment TEXT
);
CREATE TABLE regex_by_group
(
regex_id INTEGER NOT NULL REFERENCES regex (id),
group_id INTEGER NOT NULL REFERENCES "group" (id),
PRIMARY KEY (regex_id, group_id)
);
CREATE TABLE adlist
(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -78,65 +42,147 @@ CREATE TABLE adlist_by_group
CREATE TABLE gravity
(
-    domain TEXT PRIMARY KEY
+    domain TEXT NOT NULL,
+    adlist_id INTEGER NOT NULL REFERENCES adlist (id)
);
CREATE TABLE info
(
    property TEXT PRIMARY KEY,
    value TEXT NOT NULL
);
-INSERT INTO info VALUES("version","1");
+INSERT INTO "info" VALUES('version','12');
+CREATE TABLE domain_audit
+(
+    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    domain TEXT UNIQUE NOT NULL,
+    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int))
+);
+CREATE TABLE domainlist_by_group
+(
+    domainlist_id INTEGER NOT NULL REFERENCES domainlist (id),
+    group_id INTEGER NOT NULL REFERENCES "group" (id),
+    PRIMARY KEY (domainlist_id, group_id)
+);
+CREATE TABLE client
+(
+    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    ip TEXT NOL NULL UNIQUE,
+    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
+    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
+    comment TEXT
+);
+CREATE TABLE client_by_group
+(
+    client_id INTEGER NOT NULL REFERENCES client (id),
+    group_id INTEGER NOT NULL REFERENCES "group" (id),
+    PRIMARY KEY (client_id, group_id)
+);
-CREATE VIEW vw_gravity AS SELECT domain
-    FROM gravity
-    WHERE domain NOT IN (SELECT domain from vw_whitelist);
-CREATE VIEW vw_whitelist AS SELECT DISTINCT domain
-    FROM whitelist
-    LEFT JOIN whitelist_by_group ON whitelist_by_group.whitelist_id = whitelist.id
-    LEFT JOIN "group" ON "group".id = whitelist_by_group.group_id
-    WHERE whitelist.enabled = 1 AND (whitelist_by_group.group_id IS NULL OR "group".enabled = 1)
-    ORDER BY whitelist.id;
-CREATE TRIGGER tr_whitelist_update AFTER UPDATE ON whitelist
-    BEGIN
-      UPDATE whitelist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
-    END;
-CREATE VIEW vw_blacklist AS SELECT DISTINCT domain
-    FROM blacklist
-    LEFT JOIN blacklist_by_group ON blacklist_by_group.blacklist_id = blacklist.id
-    LEFT JOIN "group" ON "group".id = blacklist_by_group.group_id
-    WHERE blacklist.enabled = 1 AND (blacklist_by_group.group_id IS NULL OR "group".enabled = 1)
-    ORDER BY blacklist.id;
CREATE TRIGGER tr_blacklist_update AFTER UPDATE ON blacklist
BEGIN
UPDATE blacklist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;
CREATE VIEW vw_regex AS SELECT DISTINCT domain
FROM regex
LEFT JOIN regex_by_group ON regex_by_group.regex_id = regex.id
LEFT JOIN "group" ON "group".id = regex_by_group.group_id
WHERE regex.enabled = 1 AND (regex_by_group.group_id IS NULL OR "group".enabled = 1)
ORDER BY regex.id;
CREATE TRIGGER tr_regex_update AFTER UPDATE ON regex
BEGIN
UPDATE regex SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;
CREATE VIEW vw_adlist AS SELECT DISTINCT address
FROM adlist
LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = adlist.id
LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1)
ORDER BY adlist.id;
CREATE TRIGGER tr_adlist_update AFTER UPDATE ON adlist
    BEGIN
      UPDATE adlist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE address = NEW.address;
    END;
CREATE TRIGGER tr_client_update AFTER UPDATE ON client
BEGIN
UPDATE client SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE ip = NEW.ip;
END;
CREATE TRIGGER tr_domainlist_update AFTER UPDATE ON domainlist
BEGIN
UPDATE domainlist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;
CREATE VIEW vw_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
FROM domainlist
LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
AND domainlist.type = 0
ORDER BY domainlist.id;
CREATE VIEW vw_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
FROM domainlist
LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
AND domainlist.type = 1
ORDER BY domainlist.id;
CREATE VIEW vw_regex_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
FROM domainlist
LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
AND domainlist.type = 2
ORDER BY domainlist.id;
CREATE VIEW vw_regex_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
FROM domainlist
LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
AND domainlist.type = 3
ORDER BY domainlist.id;
CREATE VIEW vw_gravity AS SELECT domain, adlist_by_group.group_id AS group_id
FROM gravity
LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = gravity.adlist_id
LEFT JOIN adlist ON adlist.id = gravity.adlist_id
LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1);
CREATE VIEW vw_adlist AS SELECT DISTINCT address, adlist.id AS id
FROM adlist
LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = adlist.id
LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1)
ORDER BY adlist.id;
CREATE TRIGGER tr_domainlist_add AFTER INSERT ON domainlist
BEGIN
INSERT INTO domainlist_by_group (domainlist_id, group_id) VALUES (NEW.id, 0);
END;
CREATE TRIGGER tr_client_add AFTER INSERT ON client
BEGIN
INSERT INTO client_by_group (client_id, group_id) VALUES (NEW.id, 0);
END;
CREATE TRIGGER tr_adlist_add AFTER INSERT ON adlist
BEGIN
INSERT INTO adlist_by_group (adlist_id, group_id) VALUES (NEW.id, 0);
END;
CREATE TRIGGER tr_group_update AFTER UPDATE ON "group"
BEGIN
UPDATE "group" SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE id = NEW.id;
END;
CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
INSERT OR IGNORE INTO "group" (id,enabled,name) VALUES (0,1,'Default');
END;
CREATE TRIGGER tr_domainlist_delete AFTER DELETE ON domainlist
BEGIN
DELETE FROM domainlist_by_group WHERE domainlist_id = OLD.id;
END;
CREATE TRIGGER tr_adlist_delete AFTER DELETE ON adlist
BEGIN
DELETE FROM adlist_by_group WHERE adlist_id = OLD.id;
END;
CREATE TRIGGER tr_client_delete AFTER DELETE ON client
BEGIN
DELETE FROM client_by_group WHERE client_id = OLD.id;
END;
COMMIT;
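The vw_* views above join domainlist/adlist against their *_by_group tables and "group", so consumers of the views only see rows whose entry and group are both enabled. A quick way to inspect that from the shell, using the view columns defined above (read access to gravity.db assumed):

#!/usr/bin/env bash
gravityDBfile="/etc/pihole/gravity.db"

# Active exact-whitelist entries (domainlist.type = 0) and their group id
sqlite3 "${gravityDBfile}" "SELECT domain, group_id FROM vw_whitelist;"

# Enabled adlists as the updater and block page see them
sqlite3 "${gravityDBfile}" "SELECT id, address FROM vw_adlist;"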


@@ -0,0 +1,42 @@
.timeout 30000
ATTACH DATABASE '/etc/pihole/gravity.db' AS OLD;
BEGIN TRANSACTION;
DROP TRIGGER tr_domainlist_add;
DROP TRIGGER tr_client_add;
DROP TRIGGER tr_adlist_add;
INSERT OR REPLACE INTO "group" SELECT * FROM OLD."group";
INSERT OR REPLACE INTO domain_audit SELECT * FROM OLD.domain_audit;
INSERT OR REPLACE INTO domainlist SELECT * FROM OLD.domainlist;
INSERT OR REPLACE INTO domainlist_by_group SELECT * FROM OLD.domainlist_by_group;
INSERT OR REPLACE INTO adlist SELECT * FROM OLD.adlist;
INSERT OR REPLACE INTO adlist_by_group SELECT * FROM OLD.adlist_by_group;
INSERT OR REPLACE INTO info SELECT * FROM OLD.info;
INSERT OR REPLACE INTO client SELECT * FROM OLD.client;
INSERT OR REPLACE INTO client_by_group SELECT * FROM OLD.client_by_group;
CREATE TRIGGER tr_domainlist_add AFTER INSERT ON domainlist
BEGIN
INSERT INTO domainlist_by_group (domainlist_id, group_id) VALUES (NEW.id, 0);
END;
CREATE TRIGGER tr_client_add AFTER INSERT ON client
BEGIN
INSERT INTO client_by_group (client_id, group_id) VALUES (NEW.id, 0);
END;
CREATE TRIGGER tr_adlist_add AFTER INSERT ON adlist
BEGIN
INSERT INTO adlist_by_group (adlist_id, group_id) VALUES (NEW.id, 0);
END;
COMMIT;
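This new copy script re-populates a freshly created database from the live one via ATTACH, first dropping the tr_*_add triggers so the copied *_by_group rows are not inserted twice, then recreating them. A hedged sketch of applying it; the target filename and template paths are placeholders for illustration and gravity.sh's real invocation may differ:

#!/usr/bin/env bash
# Build a new database from the bundled schema, then pull the data across.
# "gravity.db_temp" and the template paths below are assumptions for this sketch.
sqlite3 /etc/pihole/gravity.db_temp < /etc/.pihole/advanced/Templates/gravity.db.sql
sqlite3 /etc/pihole/gravity.db_temp < /etc/.pihole/advanced/Templates/gravity_copy.sql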


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
### BEGIN INIT INFO
# Provides: pihole-FTL
# Required-Start: $remote_fs $syslog
@@ -48,7 +48,8 @@ start() {
    chown pihole:pihole /etc/pihole /etc/pihole/dhcp.leases 2> /dev/null
    chown pihole:pihole /var/log/pihole-FTL.log /var/log/pihole.log
    chmod 0644 /var/log/pihole-FTL.log /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole.log
-    echo "nameserver 127.0.0.1" | /sbin/resolvconf -a lo.piholeFTL
+    # Chown database files to the user FTL runs as. We ignore errors as the files may not (yet) exist
+    chown pihole:pihole /etc/pihole/pihole-FTL.db /etc/pihole/gravity.db 2> /dev/null
    if setcap CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_NET_ADMIN+eip "$(which pihole-FTL)"; then
        su -s /bin/sh -c "/usr/bin/pihole-FTL" "$FTLUSER"
    else
@@ -62,7 +63,6 @@ start() {
# Stop the service
stop() {
    if is_running; then
-        /sbin/resolvconf -d lo.piholeFTL
        kill "$(get_pid)"
        for i in {1..5}; do
            if ! is_running; then


@@ -15,7 +15,7 @@ _pihole() {
            COMPREPLY=( $(compgen -W "${opts_lists}" -- ${cur}) )
            ;;
        "admin")
-            opts_admin="celsius email fahrenheit hostrecord interface kelvin password privacylevel"
+            opts_admin="celsius email fahrenheit interface kelvin password privacylevel"
            COMPREPLY=( $(compgen -W "${opts_admin}" -- ${cur}) )
            ;;
        "checkout")


@@ -6,8 +6,8 @@
 * This file is copyright under the latest version of the EUPL.
 * Please see LICENSE file for your rights under this license. */
-// Sanitise HTTP_HOST output
-$serverName = htmlspecialchars($_SERVER["HTTP_HOST"]);
+// Sanitize SERVER_NAME output
+$serverName = htmlspecialchars($_SERVER["SERVER_NAME"]);
// Remove external ipv6 brackets if any
$serverName = preg_replace('/^\[(.*)\]$/', '${1}', $serverName);
@@ -50,16 +50,24 @@ function setHeader($type = "x") {
} }
// Determine block page type
-if ($serverName === "pi.hole") {
+if ($serverName === "pi.hole"
+    || (!empty($_SERVER["VIRTUAL_HOST"]) && $serverName === $_SERVER["VIRTUAL_HOST"])) {
    // Redirect to Web Interface
    exit(header("Location: /admin"));
} elseif (filter_var($serverName, FILTER_VALIDATE_IP) || in_array($serverName, $authorizedHosts)) {
    // Set Splash Page output
    $splashPage = "
-    <html><head>
+    <html>
+    <head>
        $viewPort
-        <link rel='stylesheet' href='/pihole/blockingpage.css' type='text/css'/>
+        <link rel='stylesheet' href='pihole/blockingpage.css' type='text/css'/>
-    </head><body id='splashpage'><img src='/admin/img/logo.svg'/><br/>Pi-<b>hole</b>: Your black hole for Internet advertisements<br><a href='/admin'>Did you mean to go to the admin panel?</a></body></html>
+    </head>
+    <body id='splashpage'>
+    <img src='admin/img/logo.svg'/><br/>
+    Pi-<b>hole</b>: Your black hole for Internet advertisements<br/>
+    <a href='/admin'>Did you mean to go to the admin panel?</a>
+    </body>
+    </html>
    ";
// Set splash/landing page based off presence of $landPage // Set splash/landing page based off presence of $landPage
@@ -68,7 +76,7 @@ if ($serverName === "pi.hole") {
    // Unset variables so as to not be included in $landPage
    unset($serverName, $svPasswd, $svEmail, $authorizedHosts, $validExtTypes, $currentUrlExt, $viewPort);
-    // Render splash/landing page when directly browsing via IP or authorised hostname
+    // Render splash/landing page when directly browsing via IP or authorized hostname
    exit($renderPage);
} elseif ($currentUrlExt === "js") {
    // Serve Pi-hole Javascript for blocked domains requesting JS
@@ -96,26 +104,30 @@ if ($serverName === "pi.hole") {
// Define admin email address text based off $svEmail presence
$bpAskAdmin = !empty($svEmail) ? '<a href="mailto:'.$svEmail.'?subject=Site Blocked: '.$serverName.'"></a>' : "<span/>";
-// Determine if at least one block list has been generated
-$blocklistglob = glob("/etc/pihole/list.0.*.domains");
-if ($blocklistglob === array()) {
-    die("[ERROR] There are no domain lists generated lists within <code>/etc/pihole/</code>! Please update gravity by running <code>pihole -g</code>, or repair Pi-hole using <code>pihole -r</code>.");
-}
-// Set location of adlists file
-if (is_file("/etc/pihole/adlists.list")) {
-    $adLists = "/etc/pihole/adlists.list";
-} elseif (is_file("/etc/pihole/adlists.default")) {
-    $adLists = "/etc/pihole/adlists.default";
-} else {
-    die("[ERROR] File not found: <code>/etc/pihole/adlists.list</code>");
-}
-// Get all URLs starting with "http" or "www" from adlists and re-index array numerically
-$adlistsUrls = array_values(preg_grep("/(^http)|(^www)/i", file($adLists, FILE_IGNORE_NEW_LINES)));
+// Get possible non-standard location of FTL's database
+$FTLsettings = parse_ini_file("/etc/pihole/pihole-FTL.conf");
+if (isset($FTLsettings["GRAVITYDB"])) {
+    $gravityDBFile = $FTLsettings["GRAVITYDB"];
+} else {
+    $gravityDBFile = "/etc/pihole/gravity.db";
+}
+// Connect to gravity.db
+try {
+    $db = new SQLite3($gravityDBFile, SQLITE3_OPEN_READONLY);
+} catch (Exception $exception) {
+    die("[ERROR]: Failed to connect to gravity.db");
+}
+// Get all adlist addresses
+$adlistResults = $db->query("SELECT address FROM vw_adlist");
+$adlistsUrls = array();
+while ($row = $adlistResults->fetchArray()) {
+    array_push($adlistsUrls, $row[0]);
+}
if (empty($adlistsUrls))
-    die("[ERROR]: There are no adlist URL's found within <code>$adLists</code>");
+    die("[ERROR]: There are no adlists enabled");
// Get total number of blocklists (Including Whitelist, Blacklist & Wildcard lists)
$adlistsCount = count($adlistsUrls) + 3;
@@ -127,7 +139,12 @@ ini_set("default_socket_timeout", 3);
function queryAds($serverName) {
    // Determine the time it takes while querying adlists
    $preQueryTime = microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"];
-    $queryAds = file("http://127.0.0.1/admin/scripts/pi-hole/php/queryads.php?domain=$serverName&bp", FILE_IGNORE_NEW_LINES);
+    $queryAdsURL = sprintf(
+        "http://127.0.0.1:%s/admin/scripts/pi-hole/php/queryads.php?domain=%s&bp",
+        $_SERVER["SERVER_PORT"],
+        $serverName
+    );
+    $queryAds = file($queryAdsURL, FILE_IGNORE_NEW_LINES);
    $queryAds = array_values(array_filter(preg_replace("/data:\s+/", "", $queryAds)));
    $queryTime = sprintf("%.0f", (microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"]) - $preQueryTime);
@@ -205,7 +222,7 @@ $phVersion = exec("cd /etc/.pihole/ && git describe --long --tags");
if (explode("-", $phVersion)[1] != "0")
    $execTime = microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"];
-// Please Note: Text is added via CSS to allow an admin to provide a localised
+// Please Note: Text is added via CSS to allow an admin to provide a localized
// language without the need to edit this file
setHeader();
@@ -222,10 +239,10 @@ setHeader();
<?=$viewPort ?>
<meta name="robots" content="noindex,nofollow"/>
<meta http-equiv="x-dns-prefetch-control" content="off">
-<link rel="shortcut icon" href="//pi.hole/admin/img/favicon.png" type="image/x-icon"/>
+<link rel="shortcut icon" href="admin/img/favicon.png" type="image/x-icon"/>
-<link rel="stylesheet" href="//pi.hole/pihole/blockingpage.css" type="text/css"/>
+<link rel="stylesheet" href="pihole/blockingpage.css" type="text/css"/>
<title>● <?=$serverName ?></title>
-<script src="//pi.hole/admin/scripts/vendor/jquery.min.js"></script>
+<script src="admin/scripts/vendor/jquery.min.js"></script>
<script>
window.onload = function () {
<?php


@@ -65,11 +65,11 @@ PI_HOLE_FILES=(chronometer list piholeDebug piholeLogFlush setupLCD update versi
# This directory is where the Pi-hole scripts will be installed
PI_HOLE_INSTALL_DIR="/opt/pihole"
PI_HOLE_CONFIG_DIR="/etc/pihole"
+PI_HOLE_BIN_DIR="/usr/local/bin"
PI_HOLE_BLOCKPAGE_DIR="${webroot}/pihole"
useUpdateVars=false
adlistFile="/etc/pihole/adlists.list"
-regexFile="/etc/pihole/regex.list"
# Pi-hole needs an IP address; to begin, these variables are empty since we don't know what the IP is until # Pi-hole needs an IP address; to begin, these variables are empty since we don't know what the IP is until
# this script can run # this script can run
IPV4_ADDRESS="" IPV4_ADDRESS=""
@@ -84,8 +84,13 @@ if [ -z "${USER}" ]; then
fi
-# Find the rows and columns will default to 80x24 if it can not be detected
-screen_size=$(stty size || printf '%d %d' 24 80)
+# Check if we are running on a real terminal and find the rows and columns
+# If there is no real terminal, we will default to 80x24
+if [ -t 0 ] ; then
+    screen_size=$(stty size)
+else
+    screen_size="24 80"
+fi
# Set rows variable to contain first number
printf -v rows '%d' "${screen_size%% *}"
# Set columns variable to contain second number
@@ -133,9 +138,6 @@ else
    OVER="\\r\\033[K"
fi
-# Define global binary variable
-binary="tbd"
# A simple function that just echoes out our logo in ASCII format
# This lets users know that it is a Pi-hole, LLC product
show_ascii_berry() {
@@ -186,10 +188,10 @@ if is_command apt-get ; then
    # grep -c will return 1 retVal on 0 matches, block this throwing the set -e with an OR TRUE
    PKG_COUNT="${PKG_MANAGER} -s -o Debug::NoLocking=true upgrade | grep -c ^Inst || true"
    # Some distros vary slightly so these fixes for dependencies may apply
-    # on Ubuntu 18.04.1 LTS we need to add the universe repository to gain access to dialog and dhcpcd5
+    # on Ubuntu 18.04.1 LTS we need to add the universe repository to gain access to dhcpcd5
    APT_SOURCES="/etc/apt/sources.list"
    if awk 'BEGIN{a=1;b=0}/bionic main/{a=0}/bionic.*universe/{b=1}END{exit a + b}' ${APT_SOURCES}; then
-        if ! whiptail --defaultno --title "Dependencies Require Update to Allowed Repositories" --yesno "Would you like to enable 'universe' repository?\\n\\nThis repository is required by the following packages:\\n\\n- dhcpcd5\\n- dialog" "${r}" "${c}"; then
+        if ! whiptail --defaultno --title "Dependencies Require Update to Allowed Repositories" --yesno "Would you like to enable 'universe' repository?\\n\\nThis repository is required by the following packages:\\n\\n- dhcpcd5" "${r}" "${c}"; then
            printf " %b Aborting installation: dependencies could not be installed.\\n" "${CROSS}"
            exit # exit the installer
        else
@@ -240,12 +242,12 @@ if is_command apt-get ; then
    fi
    # Since our install script is so large, we need several other programs to successfully get a machine provisioned
    # These programs are stored in an array so they can be looped through later
-    INSTALLER_DEPS=(apt-utils dialog debconf dhcpcd5 git "${iproute_pkg}" whiptail)
+    INSTALLER_DEPS=(dhcpcd5 git "${iproute_pkg}" whiptail)
    # Pi-hole itself has several dependencies that also need to be installed
-    PIHOLE_DEPS=(cron curl dnsutils iputils-ping lsof netcat psmisc sudo unzip wget idn2 sqlite3 libcap2-bin dns-root-data resolvconf libcap2)
+    PIHOLE_DEPS=(cron curl dnsutils iputils-ping lsof netcat psmisc sudo unzip wget idn2 sqlite3 libcap2-bin dns-root-data libcap2)
    # The Web dashboard has some that also need to be installed
    # It's useful to separate the two since our repos are also setup as "Core" code and "Web" code
-    PIHOLE_WEB_DEPS=(lighttpd "${phpVer}-common" "${phpVer}-cgi" "${phpVer}-${phpSqlite}")
+    PIHOLE_WEB_DEPS=(lighttpd "${phpVer}-common" "${phpVer}-cgi" "${phpVer}-${phpSqlite}" "${phpVer}-xml" "php-intl")
    # The Web server user,
    LIGHTTPD_USER="www-data"
    # group,
@@ -283,17 +285,16 @@ elif is_command rpm ; then
    UPDATE_PKG_CACHE=":"
    PKG_INSTALL=("${PKG_MANAGER}" install -y)
    PKG_COUNT="${PKG_MANAGER} check-update | egrep '(.i686|.x86|.noarch|.arm|.src)' | wc -l"
-    INSTALLER_DEPS=(dialog git iproute newt procps-ng which chkconfig)
+    INSTALLER_DEPS=(git iproute newt procps-ng which chkconfig)
    PIHOLE_DEPS=(bind-utils cronie curl findutils nmap-ncat sudo unzip wget libidn2 psmisc sqlite libcap)
-    PIHOLE_WEB_DEPS=(lighttpd lighttpd-fastcgi php-common php-cli php-pdo)
+    PIHOLE_WEB_DEPS=(lighttpd lighttpd-fastcgi php-common php-cli php-pdo php-xml php-json php-intl)
    LIGHTTPD_USER="lighttpd"
    LIGHTTPD_GROUP="lighttpd"
    LIGHTTPD_CFG="lighttpd.conf.fedora"
    # If the host OS is Fedora,
    if grep -qiE 'fedora|fedberry' /etc/redhat-release; then
        # all required packages should be available by default with the latest fedora release
-        # ensure 'php-json' is installed on Fedora (installed as dependency on CentOS7 + Remi repository)
-        PIHOLE_WEB_DEPS+=('php-json')
+        : # continue
    # or if host OS is CentOS,
    elif grep -qiE 'centos|scientific' /etc/redhat-release; then
        # Pi-Hole currently supports CentOS 7+ with PHP7+
@@ -308,7 +309,21 @@ elif is_command rpm ; then
        # exit the installer
        exit
    fi
-    # on CentOS we need to add the EPEL repository to gain access to Fedora packages
+    # php-json is not required on CentOS 7 as it is already compiled into php
# verifiy via `php -m | grep json`
if [[ $CURRENT_CENTOS_VERSION -eq 7 ]]; then
# create a temporary array as arrays are not designed for use as mutable data structures
CENTOS7_PIHOLE_WEB_DEPS=()
for i in "${!PIHOLE_WEB_DEPS[@]}"; do
if [[ ${PIHOLE_WEB_DEPS[i]} != "php-json" ]]; then
CENTOS7_PIHOLE_WEB_DEPS+=( "${PIHOLE_WEB_DEPS[i]}" )
fi
done
# re-assign the clean dependency array back to PIHOLE_WEB_DEPS
PIHOLE_WEB_DEPS=("${CENTOS7_PIHOLE_WEB_DEPS[@]}")
unset CENTOS7_PIHOLE_WEB_DEPS
fi
# CentOS requires the EPEL repository to gain access to Fedora packages
EPEL_PKG="epel-release" EPEL_PKG="epel-release"
rpm -q ${EPEL_PKG} &> /dev/null || rc=$? rpm -q ${EPEL_PKG} &> /dev/null || rc=$?
if [[ $rc -ne 0 ]]; then if [[ $rc -ne 0 ]]; then
@@ -374,16 +389,12 @@ is_repo() {
# Use a named, local variable instead of the vague $1, which is the first argument passed to this function # Use a named, local variable instead of the vague $1, which is the first argument passed to this function
# These local variables should always be lowercase # These local variables should always be lowercase
local directory="${1}" local directory="${1}"
# A local variable for the current directory
local curdir
# A variable to store the return code # A variable to store the return code
local rc local rc
# Assign the current directory variable by using pwd
curdir="${PWD}"
# If the first argument passed to this function is a directory, # If the first argument passed to this function is a directory,
if [[ -d "${directory}" ]]; then if [[ -d "${directory}" ]]; then
# move into the directory # move into the directory
cd "${directory}" pushd "${directory}" &> /dev/null || return 1
# Use git to check if the directory is a repo # Use git to check if the directory is a repo
# git -C is not used here to support git versions older than 1.8.4 # git -C is not used here to support git versions older than 1.8.4
git status --short &> /dev/null || rc=$? git status --short &> /dev/null || rc=$?
@@ -393,7 +404,7 @@ is_repo() {
        rc=1
    fi
    # Move back into the directory the user started in
-    cd "${curdir}"
+    popd &> /dev/null || return 1
    # Return the code; if one is not set, return 0
    return "${rc:-0}"
}
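The installer changes above swap the old cd/curdir bookkeeping for pushd/popd. A simplified, renamed variant of that pattern (is_git_repo here is not the installer's function, just a sketch of the same idea):

#!/usr/bin/env bash
is_git_repo() {
    local directory="${1}"
    local rc
    [[ -d "${directory}" ]] || return 1
    # pushd remembers where we came from; the "|| return 1" guards against unreadable paths
    pushd "${directory}" &> /dev/null || return 1
    git status --short &> /dev/null || rc=$?
    # popd restores the previous directory regardless of what git reported
    popd &> /dev/null || return 1
    return "${rc:-0}"
}

is_git_repo "/etc/.pihole" && echo "looks like a git repo" || echo "not a repo"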
@@ -403,6 +414,7 @@ make_repo() {
# Set named variables for better readability # Set named variables for better readability
local directory="${1}" local directory="${1}"
local remoteRepo="${2}" local remoteRepo="${2}"
# The message to display when this function is running # The message to display when this function is running
str="Clone ${remoteRepo} into ${directory}" str="Clone ${remoteRepo} into ${directory}"
# Display the message and use the color table to preface the message with an "info" indicator # Display the message and use the color table to preface the message with an "info" indicator
@@ -416,10 +428,19 @@ make_repo() {
git clone -q --depth 20 "${remoteRepo}" "${directory}" &> /dev/null || return $? git clone -q --depth 20 "${remoteRepo}" "${directory}" &> /dev/null || return $?
# Data in the repositories is public anyway so we can make it readable by everyone (+r to keep executable permission if already set by git) # Data in the repositories is public anyway so we can make it readable by everyone (+r to keep executable permission if already set by git)
chmod -R a+rX "${directory}" chmod -R a+rX "${directory}"
# Move into the directory that was passed as an argument
pushd "${directory}" &> /dev/null || return 1
# Check current branch. If it is master, then reset to the latest availible tag.
# In case extra commits have been added after tagging/release (i.e in case of metadata updates/README.MD tweaks)
curBranch=$(git rev-parse --abbrev-ref HEAD)
if [[ "${curBranch}" == "master" ]]; then #If we're calling make_repo() then it should always be master, we may not need to check.
git reset --hard "$(git describe --abbrev=0 --tags)" || return $?
fi
# Show a colored message showing it's status # Show a colored message showing it's status
printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}" printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
# Always return 0? Not sure this is correct
# Move back into the original directory
popd &> /dev/null || return 1
    return 0
}
@@ -430,17 +451,14 @@ update_repo() {
# but since they are local, their scope does not go beyond this function # but since they are local, their scope does not go beyond this function
# This helps prevent the wrong value from being assigned if you were to set the variable as a GLOBAL one # This helps prevent the wrong value from being assigned if you were to set the variable as a GLOBAL one
local directory="${1}" local directory="${1}"
-    local curdir
+    local curBranch
# A variable to store the message we want to display; # A variable to store the message we want to display;
# Again, it's useful to store these in variables in case we need to reuse or change the message; # Again, it's useful to store these in variables in case we need to reuse or change the message;
# we only need to make one change here # we only need to make one change here
local str="Update repo in ${1}" local str="Update repo in ${1}"
# Make sure we know what directory we are in so we can move back into it
curdir="${PWD}"
# Move into the directory that was passed as an argument # Move into the directory that was passed as an argument
cd "${directory}" &> /dev/null || return 1 pushd "${directory}" &> /dev/null || return 1
# Let the user know what's happening # Let the user know what's happening
printf " %b %s..." "${INFO}" "${str}" printf " %b %s..." "${INFO}" "${str}"
# Stash any local commits as they conflict with our working code # Stash any local commits as they conflict with our working code
@@ -448,12 +466,18 @@ update_repo() {
git clean --quiet --force -d || true # Okay for already clean directory git clean --quiet --force -d || true # Okay for already clean directory
# Pull the latest commits # Pull the latest commits
git pull --quiet &> /dev/null || return $? git pull --quiet &> /dev/null || return $?
# Check current branch. If it is master, then reset to the latest availible tag.
# In case extra commits have been added after tagging/release (i.e in case of metadata updates/README.MD tweaks)
curBranch=$(git rev-parse --abbrev-ref HEAD)
if [[ "${curBranch}" == "master" ]]; then
git reset --hard "$(git describe --abbrev=0 --tags)" || return $?
fi
    # Show a completion message
    printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
    # Data in the repositories is public anyway so we can make it readable by everyone (+r to keep executable permission if already set by git)
    chmod -R a+rX "${directory}"
    # Move back into the original directory
-    cd "${curdir}" &> /dev/null || return 1
+    popd &> /dev/null || return 1
    return 0
}
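Both make_repo and update_repo now pin a master checkout to the newest tag with `git describe --abbrev=0 --tags`, so commits added after a release (README tweaks, metadata) are ignored. A sketch of that step in isolation; the repository path is only an example and the repo must have at least one tag:

#!/usr/bin/env bash
repo_dir="/etc/.pihole"   # any local clone will do for this sketch

pushd "${repo_dir}" &> /dev/null || exit 1
curBranch=$(git rev-parse --abbrev-ref HEAD)
if [[ "${curBranch}" == "master" ]]; then
    # Latest reachable tag, e.g. v5.0 (errors if the clone has no tags)
    latestTag=$(git describe --abbrev=0 --tags)
    git reset --hard "${latestTag}"
fi
popd &> /dev/null || exit 1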
@@ -492,7 +516,7 @@ resetRepo() {
# Use named variables for arguments # Use named variables for arguments
local directory="${1}" local directory="${1}"
# Move into the directory # Move into the directory
cd "${directory}" &> /dev/null || return 1 pushd "${directory}" &> /dev/null || return 1
# Store the message in a variable # Store the message in a variable
str="Resetting repository within ${1}..." str="Resetting repository within ${1}..."
# Show the message # Show the message
@@ -503,6 +527,8 @@ resetRepo() {
chmod -R a+rX "${directory}" chmod -R a+rX "${directory}"
# And show the status # And show the status
printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}" printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
# Return to where we came from
popd &> /dev/null || return 1
# Returning success anyway? # Returning success anyway?
return 0 return 0
} }
@@ -829,11 +855,12 @@ setDHCPCD() {
echo "interface ${PIHOLE_INTERFACE} echo "interface ${PIHOLE_INTERFACE}
static ip_address=${IPV4_ADDRESS} static ip_address=${IPV4_ADDRESS}
static routers=${IPv4gw} static routers=${IPv4gw}
static domain_name_servers=127.0.0.1" | tee -a /etc/dhcpcd.conf >/dev/null static domain_name_servers=${PIHOLE_DNS_1} ${PIHOLE_DNS_2}" | tee -a /etc/dhcpcd.conf >/dev/null
# Then use the ip command to immediately set the new address # Then use the ip command to immediately set the new address
ip addr replace dev "${PIHOLE_INTERFACE}" "${IPV4_ADDRESS}" ip addr replace dev "${PIHOLE_INTERFACE}" "${IPV4_ADDRESS}"
# Also give a warning that the user may need to reboot their system # Also give a warning that the user may need to reboot their system
printf " %b Set IP address to %s \\n You may need to restart after the install is complete\\n" "${TICK}" "${IPV4_ADDRESS%/*}" printf " %b Set IP address to %s\\n" "${TICK}" "${IPV4_ADDRESS%/*}"
printf " %b You may need to restart after the install is complete\\n" "${INFO}"
fi fi
} }
@@ -934,7 +961,7 @@ valid_ip() {
    # and set the new one to a dot (period)
    IFS='.'
    # Put the IP into an array
-    ip=("${ip}")
+    read -r -a ip <<< "${ip}"
    # Restore the IFS to what it was
    IFS=${OIFS}
    ## Evaluate each octet by checking if it's less than or equal to 255 (the max for each octet)
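The fix above replaces the quoted single-element assignment `ip=("${ip}")`, which never split anything, with `read -r -a`, which honours the dot IFS and yields one array element per octet. A standalone sketch with a sample address:

#!/usr/bin/env bash
ip="192.168.1.10"   # sample address for illustration

OIFS=$IFS
IFS='.'
read -r -a octets <<< "${ip}"   # four elements: octets[0]=192 ... octets[3]=10
IFS=$OIFS

valid=true
for octet in "${octets[@]}"; do
    # Each octet must be 255 or less
    [[ "${octet}" -le 255 ]] || valid=false
done
echo "octets: ${octets[*]} -> valid=${valid}"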
@@ -979,8 +1006,6 @@ setDNS() {
# exit if Cancel is selected # exit if Cancel is selected
{ printf " %bCancel was selected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; exit 1; } { printf " %bCancel was selected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; exit 1; }
# Display the selection
printf " %b Using " "${INFO}"
# Depending on the user's choice, set the GLOBAl variables to the IP of the respective provider # Depending on the user's choice, set the GLOBAl variables to the IP of the respective provider
if [[ "${DNSchoices}" == "Custom" ]] if [[ "${DNSchoices}" == "Custom" ]]
then then
@@ -1032,14 +1057,14 @@ setDNS() {
if [[ "${PIHOLE_DNS_2}" == "${strInvalid}" ]]; then if [[ "${PIHOLE_DNS_2}" == "${strInvalid}" ]]; then
PIHOLE_DNS_2="" PIHOLE_DNS_2=""
fi fi
# Since the settings will not work, stay in the loop # Since the settings will not work, stay in the loop
DNSSettingsCorrect=False DNSSettingsCorrect=False
# Otherwise, # Otherwise,
else else
# Show the settings # Show the settings
if (whiptail --backtitle "Specify Upstream DNS Provider(s)" --title "Upstream DNS Provider(s)" --yesno "Are these settings correct?\\n DNS Server 1: $PIHOLE_DNS_1\\n DNS Server 2: ${PIHOLE_DNS_2}" "${r}" "${c}"); then if (whiptail --backtitle "Specify Upstream DNS Provider(s)" --title "Upstream DNS Provider(s)" --yesno "Are these settings correct?\\n DNS Server 1: $PIHOLE_DNS_1\\n DNS Server 2: ${PIHOLE_DNS_2}" "${r}" "${c}"); then
# and break from the loop since the servers are valid # and break from the loop since the servers are valid
DNSSettingsCorrect=True DNSSettingsCorrect=True
# Otherwise, # Otherwise,
else else
# If the settings are wrong, the loop continues # If the settings are wrong, the loop continues
@@ -1047,7 +1072,7 @@ setDNS() {
fi fi
fi fi
done done
else else
# Save the old Internal Field Separator in a variable # Save the old Internal Field Separator in a variable
OIFS=$IFS OIFS=$IFS
# and set the new one to newline # and set the new one to newline
@@ -1057,7 +1082,6 @@ setDNS() {
DNSName="$(cut -d';' -f1 <<< "${DNSServer}")" DNSName="$(cut -d';' -f1 <<< "${DNSServer}")"
if [[ "${DNSchoices}" == "${DNSName}" ]] if [[ "${DNSchoices}" == "${DNSName}" ]]
then then
printf "%s\\n" "${DNSName}"
PIHOLE_DNS_1="$(cut -d';' -f2 <<< "${DNSServer}")" PIHOLE_DNS_1="$(cut -d';' -f2 <<< "${DNSServer}")"
PIHOLE_DNS_2="$(cut -d';' -f3 <<< "${DNSServer}")" PIHOLE_DNS_2="$(cut -d';' -f3 <<< "${DNSServer}")"
break break
@@ -1066,6 +1090,11 @@ setDNS() {
# Restore the IFS to what it was # Restore the IFS to what it was
IFS=${OIFS} IFS=${OIFS}
fi fi
# Display final selection
local DNSIP=${PIHOLE_DNS_1}
[[ -z ${PIHOLE_DNS_2} ]] || DNSIP+=", ${PIHOLE_DNS_2}"
printf " %b Using upstream DNS: %s (%s)\\n" "${INFO}" "${DNSchoices}" "${DNSIP}"
} }
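
The provider branch above assumes each entry of the DNS_SERVERS list is a semicolon-separated record of the form name;primary;secondary, which is why cut -d';' picks the fields. A hedged sketch of that parsing; the two records and the array form are illustrative (the installer iterates a newline-separated string instead):

    #!/usr/bin/env bash
    # Illustrative provider records: name;IPv4 primary;IPv4 secondary
    DNS_SERVERS=("Google;8.8.8.8;8.8.4.4" "Quad9;9.9.9.9;149.112.112.112")
    DNSchoices="Quad9"

    for record in "${DNS_SERVERS[@]}"; do
        DNSName="$(cut -d';' -f1 <<< "${record}")"
        if [[ "${DNSchoices}" == "${DNSName}" ]]; then
            PIHOLE_DNS_1="$(cut -d';' -f2 <<< "${record}")"
            PIHOLE_DNS_2="$(cut -d';' -f3 <<< "${record}")"
            break
        fi
    done

    # Summarise the final selection on one line, as the hunk above now does
    DNSIP="${PIHOLE_DNS_1}"
    [[ -z "${PIHOLE_DNS_2}" ]] || DNSIP+=", ${PIHOLE_DNS_2}"
    printf " [i] Using upstream DNS: %s (%s)\n" "${DNSchoices}" "${DNSIP}"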
# Allow the user to enable/disable logging # Allow the user to enable/disable logging
@@ -1177,15 +1206,12 @@ chooseBlocklists() {
mv "${adlistFile}" "${adlistFile}.old" mv "${adlistFile}" "${adlistFile}.old"
fi fi
# Let user select (or not) blocklists via a checklist # Let user select (or not) blocklists via a checklist
cmd=(whiptail --separate-output --checklist "Pi-hole relies on third party lists in order to block ads.\\n\\nYou can use the suggestions below, and/or add your own after installation\\n\\nTo deselect any list, use the arrow keys and spacebar" "${r}" "${c}" 7) cmd=(whiptail --separate-output --checklist "Pi-hole relies on third party lists in order to block ads.\\n\\nYou can use the suggestions below, and/or add your own after installation\\n\\nTo deselect any list, use the arrow keys and spacebar" "${r}" "${c}" 5)
# In an array, show the options available (all off by default): # In an array, show the options available (all off by default):
options=(StevenBlack "StevenBlack's Unified Hosts List" on options=(StevenBlack "StevenBlack's Unified Hosts List" on
MalwareDom "MalwareDomains" on MalwareDom "MalwareDomains" on
Cameleon "Cameleon" on
ZeusTracker "ZeusTracker" on
DisconTrack "Disconnect.me Tracking" on DisconTrack "Disconnect.me Tracking" on
DisconAd "Disconnect.me Ads" on DisconAd "Disconnect.me Ads" on)
HostsFile "Hosts-file.net Ads" on)
# In a variable, show the choices available; exit if Cancel is selected # In a variable, show the choices available; exit if Cancel is selected
choices=$("${cmd[@]}" "${options[@]}" 2>&1 >/dev/tty) || { printf " %bCancel was selected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; rm "${adlistFile}" ;exit 1; } choices=$("${cmd[@]}" "${options[@]}" 2>&1 >/dev/tty) || { printf " %bCancel was selected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; rm "${adlistFile}" ;exit 1; }
@@ -1204,11 +1230,8 @@ appendToListsFile() {
case $1 in case $1 in
StevenBlack ) echo "https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts" >> "${adlistFile}";; StevenBlack ) echo "https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts" >> "${adlistFile}";;
MalwareDom ) echo "https://mirror1.malwaredomains.com/files/justdomains" >> "${adlistFile}";; MalwareDom ) echo "https://mirror1.malwaredomains.com/files/justdomains" >> "${adlistFile}";;
Cameleon ) echo "http://sysctl.org/cameleon/hosts" >> "${adlistFile}";;
ZeusTracker ) echo "https://zeustracker.abuse.ch/blocklist.php?download=domainblocklist" >> "${adlistFile}";;
DisconTrack ) echo "https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt" >> "${adlistFile}";; DisconTrack ) echo "https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt" >> "${adlistFile}";;
DisconAd ) echo "https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt" >> "${adlistFile}";; DisconAd ) echo "https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt" >> "${adlistFile}";;
HostsFile ) echo "https://hosts-file.net/ad_servers.txt" >> "${adlistFile}";;
esac esac
} }
@@ -1222,11 +1245,8 @@ installDefaultBlocklists() {
fi fi
appendToListsFile StevenBlack appendToListsFile StevenBlack
appendToListsFile MalwareDom appendToListsFile MalwareDom
appendToListsFile Cameleon
appendToListsFile ZeusTracker
appendToListsFile DisconTrack appendToListsFile DisconTrack
appendToListsFile DisconAd appendToListsFile DisconAd
appendToListsFile HostsFile
} }
# Check if /etc/dnsmasq.conf is from pi-hole. If so replace with an original and install new in .d directory # Check if /etc/dnsmasq.conf is from pi-hole. If so replace with an original and install new in .d directory
@@ -1349,7 +1369,7 @@ installScripts() {
install -o "${USER}" -Dm755 -t "${PI_HOLE_INSTALL_DIR}" ./advanced/Scripts/*.sh install -o "${USER}" -Dm755 -t "${PI_HOLE_INSTALL_DIR}" ./advanced/Scripts/*.sh
install -o "${USER}" -Dm755 -t "${PI_HOLE_INSTALL_DIR}" ./automated\ install/uninstall.sh install -o "${USER}" -Dm755 -t "${PI_HOLE_INSTALL_DIR}" ./automated\ install/uninstall.sh
install -o "${USER}" -Dm755 -t "${PI_HOLE_INSTALL_DIR}" ./advanced/Scripts/COL_TABLE install -o "${USER}" -Dm755 -t "${PI_HOLE_INSTALL_DIR}" ./advanced/Scripts/COL_TABLE
install -o "${USER}" -Dm755 -t /usr/local/bin/ pihole install -o "${USER}" -Dm755 -t "${PI_HOLE_BIN_DIR}" pihole
install -Dm644 ./advanced/bash-completion/pihole /etc/bash_completion.d/pihole install -Dm644 ./advanced/bash-completion/pihole /etc/bash_completion.d/pihole
printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}" printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
@@ -1382,11 +1402,6 @@ installConfigs() {
return 1 return 1
fi fi
fi fi
# Install an empty regex file
if [[ ! -f "${regexFile}" ]]; then
# Let PHP edit the regex file, if installed
install -o pihole -g "${LIGHTTPD_GROUP:-pihole}" -m 664 /dev/null "${regexFile}"
fi
# If the user chose to install the dashboard, # If the user chose to install the dashboard,
if [[ "${INSTALL_WEB_SERVER}" == true ]]; then if [[ "${INSTALL_WEB_SERVER}" == true ]]; then
# and if the Web server conf directory does not exist, # and if the Web server conf directory does not exist,
@@ -1624,20 +1639,23 @@ install_dependent_packages() {
# amount of download traffic. # amount of download traffic.
# NOTE: We may be able to use this installArray in the future to create a list of packages that were # NOTE: We may be able to use this installArray in the future to create a list of packages that were
# installed by us, and remove only the installed packages, and not the entire list. # installed by us, and remove only the installed packages, and not the entire list.
if is_command debconf-apt-progress ; then if is_command apt-get ; then
# For each package, # For each package,
for i in "$@"; do for i in "$@"; do
printf " %b Checking for %s..." "${INFO}" "${i}" printf " %b Checking for %s..." "${INFO}" "${i}"
if dpkg-query -W -f='${Status}' "${i}" 2>/dev/null | grep "ok installed" &> /dev/null; then if dpkg-query -W -f='${Status}' "${i}" 2>/dev/null | grep "ok installed" &> /dev/null; then
printf "%b %b Checking for %s\\n" "${OVER}" "${TICK}" "${i}" printf "%b %b Checking for %s\\n" "${OVER}" "${TICK}" "${i}"
else else
echo -e "${OVER} ${INFO} Checking for $i (will be installed)" printf "%b %b Checking for %s (will be installed)\\n" "${OVER}" "${INFO}" "${i}"
installArray+=("${i}") installArray+=("${i}")
fi fi
done done
if [[ "${#installArray[@]}" -gt 0 ]]; then if [[ "${#installArray[@]}" -gt 0 ]]; then
test_dpkg_lock test_dpkg_lock
debconf-apt-progress -- "${PKG_INSTALL[@]}" "${installArray[@]}" printf " %b Processing %s install(s) for: %s, please wait...\\n" "${INFO}" "${PKG_MANAGER}" "${installArray[*]}"
printf '%*s\n' "$columns" '' | tr " " -;
"${PKG_INSTALL[@]}" "${installArray[@]}"
printf '%*s\n' "$columns" '' | tr " " -;
return return
fi fi
printf "\\n" printf "\\n"
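
On apt-based systems the dependency check above never invokes apt-get just to test for a package; it queries dpkg's status database and only calls the installer once, with the accumulated list. A minimal sketch of that pattern (the package names are examples, and PKG_INSTALL is assumed to be an apt-get install command line):

    #!/usr/bin/env bash
    PKG_INSTALL=(apt-get -qq --no-install-recommends install)   # assumed installer command
    installArray=()

    for pkg in curl sqlite3 idn2; do
        if dpkg-query -W -f='${Status}' "${pkg}" 2>/dev/null | grep -q "ok installed"; then
            echo "already installed: ${pkg}"
        else
            echo "will install:      ${pkg}"
            installArray+=("${pkg}")
        fi
    done

    # One apt-get call for everything that is actually missing
    if [[ "${#installArray[@]}" -gt 0 ]]; then
        "${PKG_INSTALL[@]}" "${installArray[@]}"
    fi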
@@ -1648,14 +1666,17 @@ install_dependent_packages() {
for i in "$@"; do for i in "$@"; do
printf " %b Checking for %s..." "${INFO}" "${i}" printf " %b Checking for %s..." "${INFO}" "${i}"
if "${PKG_MANAGER}" -q list installed "${i}" &> /dev/null; then if "${PKG_MANAGER}" -q list installed "${i}" &> /dev/null; then
printf "%b %b Checking for %s" "${OVER}" "${TICK}" "${i}" printf "%b %b Checking for %s\\n" "${OVER}" "${TICK}" "${i}"
else else
printf "%b %b Checking for %s (will be installed)" "${OVER}" "${INFO}" "${i}" printf "%b %b Checking for %s (will be installed)\\n" "${OVER}" "${INFO}" "${i}"
installArray+=("${i}") installArray+=("${i}")
fi fi
done done
if [[ "${#installArray[@]}" -gt 0 ]]; then if [[ "${#installArray[@]}" -gt 0 ]]; then
"${PKG_INSTALL[@]}" "${installArray[@]}" &> /dev/null printf " %b Processing %s install(s) for: %s, please wait...\\n" "${INFO}" "${PKG_MANAGER}" "${installArray[*]}"
printf '%*s\n' "$columns" '' | tr " " -;
"${PKG_INSTALL[@]}" "${installArray[@]}"
printf '%*s\n' "$columns" '' | tr " " -;
return return
fi fi
printf "\\n" printf "\\n"
@@ -1690,7 +1711,7 @@ installPiholeWeb() {
# Otherwise, # Otherwise,
else else
# don't do anything # don't do anything
printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${str}" printf "%b %b %s\\n" "${OVER}" "${INFO}" "${str}"
printf " No default index.lighttpd.html file found... not backing up\\n" printf " No default index.lighttpd.html file found... not backing up\\n"
fi fi
@@ -1702,13 +1723,13 @@ installPiholeWeb() {
# and copy in the pihole sudoers file # and copy in the pihole sudoers file
install -m 0640 ${PI_HOLE_LOCAL_REPO}/advanced/Templates/pihole.sudo /etc/sudoers.d/pihole install -m 0640 ${PI_HOLE_LOCAL_REPO}/advanced/Templates/pihole.sudo /etc/sudoers.d/pihole
# Add lighttpd user (OS dependent) to sudoers file # Add lighttpd user (OS dependent) to sudoers file
echo "${LIGHTTPD_USER} ALL=NOPASSWD: /usr/local/bin/pihole" >> /etc/sudoers.d/pihole echo "${LIGHTTPD_USER} ALL=NOPASSWD: ${PI_HOLE_BIN_DIR}/pihole" >> /etc/sudoers.d/pihole
# If the Web server user is lighttpd, # If the Web server user is lighttpd,
if [[ "$LIGHTTPD_USER" == "lighttpd" ]]; then if [[ "$LIGHTTPD_USER" == "lighttpd" ]]; then
# Allow executing pihole via sudo with Fedora # Allow executing pihole via sudo with Fedora
# Usually /usr/local/bin is not permitted as directory for sudoable programs # Usually ${PI_HOLE_BIN_DIR} is not permitted as a directory for sudoable programs
echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" >> /etc/sudoers.d/pihole echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:${PI_HOLE_BIN_DIR}" >> /etc/sudoers.d/pihole
fi fi
# Set the strict permissions on the file # Set the strict permissions on the file
chmod 0440 /etc/sudoers.d/pihole chmod 0440 /etc/sudoers.d/pihole
@@ -1759,45 +1780,6 @@ create_pihole_user() {
fi fi
} }
# Allow HTTP and DNS traffic
configureFirewall() {
printf "\\n"
# If a firewall is running,
if firewall-cmd --state &> /dev/null; then
# ask if the user wants to install Pi-hole's default firewall rules
whiptail --title "Firewall in use" --yesno "We have detected a running firewall\\n\\nPi-hole currently requires HTTP and DNS port access.\\n\\n\\n\\nInstall Pi-hole default firewall rules?" "${r}" "${c}" || \
{ printf " %b Not installing firewall rulesets.\\n" "${INFO}"; return 0; }
printf " %b Configuring FirewallD for httpd and pihole-FTL\\n" "${TICK}"
# Allow HTTP and DNS traffic
firewall-cmd --permanent --add-service=http --add-service=dns
# Reload the firewall to apply these changes
firewall-cmd --reload
return 0
# Check for proper kernel modules to prevent failure
elif modinfo ip_tables &> /dev/null && is_command iptables ; then
# If chain Policy is not ACCEPT or last Rule is not ACCEPT
# then check and insert our Rules above the DROP/REJECT Rule.
if iptables -S INPUT | head -n1 | grep -qv '^-P.*ACCEPT$' || iptables -S INPUT | tail -n1 | grep -qv '^-\(A\|P\).*ACCEPT$'; then
whiptail --title "Firewall in use" --yesno "We have detected a running firewall\\n\\nPi-hole currently requires HTTP and DNS port access.\\n\\n\\n\\nInstall Pi-hole default firewall rules?" "${r}" "${c}" || \
{ printf " %b Not installing firewall rulesets.\\n" "${INFO}"; return 0; }
printf " %b Installing new IPTables firewall rulesets\\n" "${TICK}"
# Check chain first, otherwise a new rule will duplicate old ones
iptables -C INPUT -p tcp -m tcp --dport 80 -j ACCEPT &> /dev/null || iptables -I INPUT 1 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -C INPUT -p tcp -m tcp --dport 53 -j ACCEPT &> /dev/null || iptables -I INPUT 1 -p tcp -m tcp --dport 53 -j ACCEPT
iptables -C INPUT -p udp -m udp --dport 53 -j ACCEPT &> /dev/null || iptables -I INPUT 1 -p udp -m udp --dport 53 -j ACCEPT
iptables -C INPUT -p tcp -m tcp --dport 4711:4720 -i lo -j ACCEPT &> /dev/null || iptables -I INPUT 1 -p tcp -m tcp --dport 4711:4720 -i lo -j ACCEPT
return 0
fi
# Otherwise,
else
# no firewall is running
printf " %b No active firewall detected.. skipping firewall configuration\\n" "${INFO}"
# so just exit
return 0
fi
printf " %b Skipping firewall configuration\\n" "${INFO}"
}
# #
finalExports() { finalExports() {
# If the Web interface is not set to be installed, # If the Web interface is not set to be installed,
@@ -1908,6 +1890,9 @@ installPihole() {
chmod a+rx /var/www/html chmod a+rx /var/www/html
# Give pihole access to the Web server group # Give pihole access to the Web server group
usermod -a -G ${LIGHTTPD_GROUP} pihole usermod -a -G ${LIGHTTPD_GROUP} pihole
# Give lighttpd access to the pihole group so the web interface can
# manage the gravity.db database
usermod -a -G pihole ${LIGHTTPD_USER}
# If the lighttpd command is executable, # If the lighttpd command is executable,
if is_command lighty-enable-mod ; then if is_command lighty-enable-mod ; then
# enable fastcgi and fastcgi-php # enable fastcgi and fastcgi-php
@@ -1945,10 +1930,6 @@ installPihole() {
# Check if dnsmasq is present. If so, disable it and back up any possible # Check if dnsmasq is present. If so, disable it and back up any possible
# config file # config file
disable_dnsmasq disable_dnsmasq
# Configure the firewall
if [[ "${useUpdateVars}" == false ]]; then
configureFirewall
fi
# install a man page entry for pihole # install a man page entry for pihole
install_manpage install_manpage
@@ -1959,20 +1940,42 @@ installPihole() {
# SELinux # SELinux
checkSelinux() { checkSelinux() {
# If the getenforce command exists, local DEFAULT_SELINUX
if is_command getenforce ; then local CURRENT_SELINUX
# Store the current mode in a variable local SELINUX_ENFORCING=0
enforceMode=$(getenforce) # Check if a SELinux configuration file exists
printf "\\n %b SELinux mode detected: %s\\n" "${INFO}" "${enforceMode}" if [[ -f /etc/selinux/config ]]; then
# If a SELinux configuration file was found, check the default SELinux mode.
# If it's enforcing, DEFAULT_SELINUX=$(awk -F= '/^SELINUX=/ {print $2}' /etc/selinux/config)
if [[ "${enforceMode}" == "Enforcing" ]]; then case "${DEFAULT_SELINUX,,}" in
# Explain Pi-hole does not support it yet enforcing)
whiptail --defaultno --title "SELinux Enforcing Detected" --yesno "SELinux is being ENFORCED on your system! \\n\\nPi-hole currently does not support SELinux, but you may still continue with the installation.\\n\\nNote: Web Admin will not be fully functional unless you set your policies correctly\\n\\nContinue installing Pi-hole?" "${r}" "${c}" || \ printf " %b %bDefault SELinux: %s%b\\n" "${CROSS}" "${COL_RED}" "${DEFAULT_SELINUX}" "${COL_NC}"
{ printf "\\n %bSELinux Enforcing detected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; exit 1; } SELINUX_ENFORCING=1
printf " %b Continuing installation with SELinux Enforcing\\n" "${INFO}" ;;
printf " %b Please refer to official SELinux documentation to create a custom policy\\n" "${INFO}" *) # 'permissive' and 'disabled'
fi printf " %b %bDefault SELinux: %s%b\\n" "${TICK}" "${COL_GREEN}" "${DEFAULT_SELINUX}" "${COL_NC}"
;;
esac
# Check the current state of SELinux
CURRENT_SELINUX=$(getenforce)
case "${CURRENT_SELINUX,,}" in
enforcing)
printf " %b %bCurrent SELinux: %s%b\\n" "${CROSS}" "${COL_RED}" "${CURRENT_SELINUX}" "${COL_NC}"
SELINUX_ENFORCING=1
;;
*) # 'permissive' and 'disabled'
printf " %b %bCurrent SELinux: %s%b\\n" "${TICK}" "${COL_GREEN}" "${CURRENT_SELINUX}" "${COL_NC}"
;;
esac
else
echo -e " ${INFO} ${COL_GREEN}SELinux not detected${COL_NC}";
fi
# Exit the installer if any SELinux checks toggled the flag
if [[ "${SELINUX_ENFORCING}" -eq 1 ]] && [[ -z "${PIHOLE_SELINUX}" ]]; then
printf " Pi-hole does not provide an SELinux policy as the required changes modify the security of your system.\\n"
printf " Please refer to https://wiki.centos.org/HowTos/SELinux if SELinux is required for your deployment.\\n"
printf "\\n %bSELinux Enforcing detected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}";
exit 1;
fi fi
} }
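
The rewritten check reads the configured default from /etc/selinux/config, reads the live state from getenforce, and only aborts when either of them is enforcing and no override variable has been exported. A condensed sketch of that decision (PIHOLE_SELINUX is the override the hunk above tests; getenforce is assumed to be present whenever the config file is):

    #!/usr/bin/env bash
    selinux_enforcing=0

    if [[ -f /etc/selinux/config ]]; then
        # Configured default, e.g. "enforcing", "permissive" or "disabled"
        DEFAULT_SELINUX="$(awk -F= '/^SELINUX=/ {print $2}' /etc/selinux/config)"
        [[ "${DEFAULT_SELINUX,,}" == "enforcing" ]] && selinux_enforcing=1
        # Live state reported by the kernel
        CURRENT_SELINUX="$(getenforce)"
        [[ "${CURRENT_SELINUX,,}" == "enforcing" ]] && selinux_enforcing=1
        echo "Default SELinux: ${DEFAULT_SELINUX}, current: ${CURRENT_SELINUX}"
    else
        echo "SELinux not detected"
    fi

    if [[ "${selinux_enforcing}" -eq 1 && -z "${PIHOLE_SELINUX}" ]]; then
        echo "SELinux is enforcing and no override is set, exiting" >&2
        exit 1
    fi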
@@ -2167,21 +2170,15 @@ clone_or_update_repos() {
} }
# Download FTL binary to random temp directory and install FTL binary # Download FTL binary to random temp directory and install FTL binary
# Disable directive for SC2120: a value _can_ be passed to this function, but it is passed from an external script that sources this one
# shellcheck disable=SC2120
FTLinstall() { FTLinstall() {
# Local, named variables # Local, named variables
local latesttag local latesttag
local str="Downloading and Installing FTL" local str="Downloading and Installing FTL"
printf " %b %s..." "${INFO}" "${str}" printf " %b %s..." "${INFO}" "${str}"
# Find the latest version tag for FTL
latesttag=$(curl -sI https://github.com/pi-hole/FTL/releases/latest | grep "Location" | awk -F '/' '{print $NF}')
# Tags should always start with v, check for that.
if [[ ! "${latesttag}" == v* ]]; then
printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${str}"
printf " %bError: Unable to get latest release location from GitHub%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"
return 1
fi
# Move into the temp ftl directory # Move into the temp ftl directory
pushd "$(mktemp -d)" > /dev/null || { printf "Unable to make temporary directory for FTL binary download\\n"; return 1; } pushd "$(mktemp -d)" > /dev/null || { printf "Unable to make temporary directory for FTL binary download\\n"; return 1; }
@@ -2197,9 +2194,12 @@ FTLinstall() {
ftlBranch="master" ftlBranch="master"
fi fi
local binary
binary="${1}"
# Determine which version of FTL to download # Determine which version of FTL to download
if [[ "${ftlBranch}" == "master" ]];then if [[ "${ftlBranch}" == "master" ]];then
url="https://github.com/pi-hole/FTL/releases/download/${latesttag%$'\r'}" url="https://github.com/pi-hole/ftl/releases/latest/download"
else else
url="https://ftl.pi-hole.net/${ftlBranch}" url="https://ftl.pi-hole.net/${ftlBranch}"
fi fi
@@ -2275,6 +2275,8 @@ get_binary_name() {
local machine local machine
machine=$(uname -m) machine=$(uname -m)
local l_binary
local str="Detecting architecture" local str="Detecting architecture"
printf " %b %s..." "${INFO}" "${str}" printf " %b %s..." "${INFO}" "${str}"
# If the machine is arm or aarch # If the machine is arm or aarch
@@ -2290,24 +2292,30 @@ get_binary_name() {
if [[ "${lib}" == "/lib/ld-linux-aarch64.so.1" ]]; then if [[ "${lib}" == "/lib/ld-linux-aarch64.so.1" ]]; then
printf "%b %b Detected ARM-aarch64 architecture\\n" "${OVER}" "${TICK}" printf "%b %b Detected ARM-aarch64 architecture\\n" "${OVER}" "${TICK}"
# set the binary to be used # set the binary to be used
binary="pihole-FTL-aarch64-linux-gnu" l_binary="pihole-FTL-aarch64-linux-gnu"
# #
elif [[ "${lib}" == "/lib/ld-linux-armhf.so.3" ]]; then elif [[ "${lib}" == "/lib/ld-linux-armhf.so.3" ]]; then
# #
if [[ "${rev}" -gt 6 ]]; then if [[ "${rev}" -gt 6 ]]; then
printf "%b %b Detected ARM-hf architecture (armv7+)\\n" "${OVER}" "${TICK}" printf "%b %b Detected ARM-hf architecture (armv7+)\\n" "${OVER}" "${TICK}"
# set the binary to be used # set the binary to be used
binary="pihole-FTL-arm-linux-gnueabihf" l_binary="pihole-FTL-arm-linux-gnueabihf"
# Otherwise, # Otherwise,
else else
printf "%b %b Detected ARM-hf architecture (armv6 or lower) Using ARM binary\\n" "${OVER}" "${TICK}" printf "%b %b Detected ARM-hf architecture (armv6 or lower) Using ARM binary\\n" "${OVER}" "${TICK}"
# set the binary to be used # set the binary to be used
binary="pihole-FTL-arm-linux-gnueabi" l_binary="pihole-FTL-arm-linux-gnueabi"
fi fi
else else
printf "%b %b Detected ARM architecture\\n" "${OVER}" "${TICK}" if [[ -f "/.dockerenv" ]]; then
# set the binary to be used printf "%b %b Detected ARM architecture in docker\\n" "${OVER}" "${TICK}"
binary="pihole-FTL-arm-linux-gnueabi" # set the binary to be used
binary="pihole-FTL-armel-native"
else
printf "%b %b Detected ARM architecture\\n" "${OVER}" "${TICK}"
# set the binary to be used
binary="pihole-FTL-arm-linux-gnueabi"
fi
fi fi
elif [[ "${machine}" == "x86_64" ]]; then elif [[ "${machine}" == "x86_64" ]]; then
# This gives the architecture of packages dpkg installs (for example, "i386") # This gives the architecture of packages dpkg installs (for example, "i386")
@@ -2320,12 +2328,12 @@ get_binary_name() {
# in the past (see https://github.com/pi-hole/pi-hole/pull/2004) # in the past (see https://github.com/pi-hole/pi-hole/pull/2004)
if [[ "${dpkgarch}" == "i386" ]]; then if [[ "${dpkgarch}" == "i386" ]]; then
printf "%b %b Detected 32bit (i686) architecture\\n" "${OVER}" "${TICK}" printf "%b %b Detected 32bit (i686) architecture\\n" "${OVER}" "${TICK}"
binary="pihole-FTL-linux-x86_32" l_binary="pihole-FTL-linux-x86_32"
else else
# 64bit # 64bit
printf "%b %b Detected x86_64 architecture\\n" "${OVER}" "${TICK}" printf "%b %b Detected x86_64 architecture\\n" "${OVER}" "${TICK}"
# set the binary to be used # set the binary to be used
binary="pihole-FTL-linux-x86_64" l_binary="pihole-FTL-linux-x86_64"
fi fi
else else
# Something else - we try to use 32bit executable and warn the user # Something else - we try to use 32bit executable and warn the user
@@ -2336,13 +2344,13 @@ get_binary_name() {
else else
printf "%b %b Detected 32bit (i686) architecture\\n" "${OVER}" "${TICK}" printf "%b %b Detected 32bit (i686) architecture\\n" "${OVER}" "${TICK}"
fi fi
binary="pihole-FTL-linux-x86_32" l_binary="pihole-FTL-linux-x86_32"
fi fi
echo ${l_binary}
} }
FTLcheckUpdate() { FTLcheckUpdate() {
get_binary_name
#In the next section we check to see if FTL is already installed (in case of pihole -r). #In the next section we check to see if FTL is already installed (in case of pihole -r).
#If the installed version matches the latest version, then check the installed sha1sum of the binary vs the remote sha1sum. If they do not match, then download #If the installed version matches the latest version, then check the installed sha1sum of the binary vs the remote sha1sum. If they do not match, then download
printf " %b Checking for existing FTL binary...\\n" "${INFO}" printf " %b Checking for existing FTL binary...\\n" "${INFO}"
@@ -2358,6 +2366,9 @@ FTLcheckUpdate() {
ftlBranch="master" ftlBranch="master"
fi fi
local binary
binary="${1}"
local remoteSha1 local remoteSha1
local localSha1 local localSha1
@@ -2400,7 +2411,12 @@ FTLcheckUpdate() {
local FTLversion local FTLversion
FTLversion=$(/usr/bin/pihole-FTL tag) FTLversion=$(/usr/bin/pihole-FTL tag)
local FTLlatesttag local FTLlatesttag
FTLlatesttag=$(curl -sI https://github.com/pi-hole/FTL/releases/latest | grep 'Location' | awk -F '/' '{print $NF}' | tr -d '\r\n')
if ! FTLlatesttag=$(curl -sI https://github.com/pi-hole/FTL/releases/latest | grep --color=never -i Location | awk -F / '{print $NF}' | tr -d '[:cntrl:]'); then
# There was an issue while retrieving the latest version
printf " %b Failed to retrieve latest FTL release metadata" "${CROSS}"
return 3
fi
if [[ "${FTLversion}" != "${FTLlatesttag}" ]]; then if [[ "${FTLversion}" != "${FTLlatesttag}" ]]; then
return 0 return 0
@@ -2428,8 +2444,10 @@ FTLcheckUpdate() {
FTLdetect() { FTLdetect() {
printf "\\n %b FTL Checks...\\n\\n" "${INFO}" printf "\\n %b FTL Checks...\\n\\n" "${INFO}"
if FTLcheckUpdate ; then printf " %b" "${2}"
FTLinstall || return 1
if FTLcheckUpdate "${1}"; then
FTLinstall "${1}" || return 1
fi fi
} }
@@ -2592,8 +2610,15 @@ main() {
fi fi
# Create the pihole user # Create the pihole user
create_pihole_user create_pihole_user
# Check if FTL is installed - do this early on as FTL is a hard dependency for Pi-hole # Check if FTL is installed - do this early on as FTL is a hard dependency for Pi-hole
if ! FTLdetect; then local funcOutput
funcOutput=$(get_binary_name) #Store output of get_binary_name here
local binary
binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)
local theRest
theRest="${funcOutput%pihole-FTL*}" # Print the rest of get_binary_name's output to display (cut out from first instance of "pihole-FTL")
if ! FTLdetect "${binary}" "${theRest}"; then
printf " %b FTL Engine not installed\\n" "${CROSS}" printf " %b FTL Engine not installed\\n" "${CROSS}"
exit 1 exit 1
fi fi
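
Because get_binary_name now both prints progress text and echoes the binary name, main() captures everything and splits it with parameter expansion: the binary name is whatever follows the last occurrence of "pihole-FTL", and the remainder is display text. A toy illustration of that split on a stand-in string (the bracketed markers stand in for the installer's status symbols):

    #!/usr/bin/env bash
    # Stand-in for the captured output of get_binary_name:
    # progress lines followed by the binary name on the last line.
    funcOutput=$'  [i] Detecting architecture...\n  [ok] Detected x86_64 architecture\npihole-FTL-linux-x86_64'

    # Strip everything up to and including the last "pihole-FTL", then glue the prefix back on
    binary="pihole-FTL${funcOutput##*pihole-FTL}"
    # Everything before that occurrence is the text to display
    theRest="${funcOutput%pihole-FTL*}"

    printf '%s' "${theRest}"
    echo "binary = ${binary}"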
@@ -2686,7 +2711,7 @@ main() {
if [[ "${INSTALL_TYPE}" == "Update" ]]; then if [[ "${INSTALL_TYPE}" == "Update" ]]; then
printf "\\n" printf "\\n"
/usr/local/bin/pihole version --current "${PI_HOLE_BIN_DIR}"/pihole version --current
fi fi
} }
@@ -17,6 +17,8 @@ coltable="/opt/pihole/COL_TABLE"
source "${coltable}" source "${coltable}"
regexconverter="/opt/pihole/wildcard_regex_converter.sh" regexconverter="/opt/pihole/wildcard_regex_converter.sh"
source "${regexconverter}" source "${regexconverter}"
# shellcheck disable=SC1091
source "/etc/.pihole/advanced/Scripts/database_migration/gravity-db.sh"
basename="pihole" basename="pihole"
PIHOLE_COMMAND="/usr/local/bin/${basename}" PIHOLE_COMMAND="/usr/local/bin/${basename}"
@@ -34,13 +36,12 @@ VPNList="/etc/openvpn/ipp.txt"
piholeGitDir="/etc/.pihole" piholeGitDir="/etc/.pihole"
gravityDBfile="${piholeDir}/gravity.db" gravityDBfile="${piholeDir}/gravity.db"
gravityTEMPfile="${piholeDir}/gravity_temp.db"
gravityDBschema="${piholeGitDir}/advanced/Templates/gravity.db.sql" gravityDBschema="${piholeGitDir}/advanced/Templates/gravity.db.sql"
gravityDBcopy="${piholeGitDir}/advanced/Templates/gravity_copy.sql"
optimize_database=false optimize_database=false
domainsExtension="domains" domainsExtension="domains"
matterAndLight="${basename}.0.matterandlight.txt"
parsedMatter="${basename}.1.parsedmatter.txt"
preEventHorizon="list.preEventHorizon"
resolver="pihole-FTL" resolver="pihole-FTL"
@@ -81,103 +82,171 @@ fi
# Generate new sqlite3 file from schema template # Generate new sqlite3 file from schema template
generate_gravity_database() { generate_gravity_database() {
sqlite3 "${gravityDBfile}" < "${gravityDBschema}" sqlite3 "${1}" < "${gravityDBschema}"
}
# Copy data from old to new database file and swap them
gravity_swap_databases() {
local str
str="Building tree"
echo -ne " ${INFO} ${str}..."
# The index is intentionally not UNIQUE as poor-quality adlists may contain domains more than once
output=$( { sqlite3 "${gravityTEMPfile}" "CREATE INDEX idx_gravity ON gravity (domain, adlist_id);"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to build gravity tree in ${gravityTEMPfile}\\n ${output}"
return 1
fi
echo -e "${OVER} ${TICK} ${str}"
str="Swapping databases"
echo -ne " ${INFO} ${str}..."
output=$( { sqlite3 "${gravityTEMPfile}" < "${gravityDBcopy}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to copy data from ${gravityDBfile} to ${gravityTEMPfile}\\n ${output}"
return 1
fi
echo -e "${OVER} ${TICK} ${str}"
# Swap databases and remove old database
rm "${gravityDBfile}"
mv "${gravityTEMPfile}" "${gravityDBfile}"
}
# Update timestamp when the gravity table was last updated successfully
update_gravity_timestamp() {
output=$( { printf ".timeout 30000\\nINSERT OR REPLACE INTO info (property,value) values ('updated',cast(strftime('%%s', 'now') as int));" | sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to update gravity timestamp in database ${gravityDBfile}\\n ${output}"
return 1
fi
return 0
} }
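
The new flow builds the list in a scratch database, indexes it, copies the surviving configuration across, stamps the update time, and only then swaps files, so a failed run never clobbers the live gravity.db. A compressed sketch of that build-then-swap sequence; the paths mirror the variables above but assume the usual /etc/pihole location, and gravity_copy.sql is the template shipped in the repository:

    #!/usr/bin/env bash
    set -e
    live_db="/etc/pihole/gravity.db"                              # assumed live database
    temp_db="/etc/pihole/gravity_temp.db"                         # assumed scratch database
    copy_sql="/etc/.pihole/advanced/Templates/gravity_copy.sql"   # copy script from the repo

    # Index the freshly imported gravity table in the scratch database
    sqlite3 "${temp_db}" "CREATE INDEX idx_gravity ON gravity (domain, adlist_id);"

    # Pull the remaining tables (adlists, domainlist, groups, ...) across
    sqlite3 "${temp_db}" < "${copy_sql}"

    # Record when gravity last completed successfully
    sqlite3 "${temp_db}" "INSERT OR REPLACE INTO info (property,value) VALUES ('updated', cast(strftime('%s','now') as int));"

    # The live database is only replaced once the new one is fully built
    rm "${live_db}"
    mv "${temp_db}" "${live_db}"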
# Import domains from file and store them in the specified database table # Import domains from file and store them in the specified database table
database_table_from_file() { database_table_from_file() {
# Define locals # Define locals
local table source backup_path backup_file local table source backup_path backup_file tmpFile type
table="${1}" table="${1}"
source="${2}" source="${2}"
backup_path="${piholeDir}/migration_backup" backup_path="${piholeDir}/migration_backup"
backup_file="${backup_path}/$(basename "${2}")" backup_file="${backup_path}/$(basename "${2}")"
# Truncate table
output=$( { sqlite3 "${gravityDBfile}" <<< "DELETE FROM ${table};"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to truncate ${table} database ${gravityDBfile}\\n ${output}"
gravity_Cleanup "error"
fi
local tmpFile
tmpFile="$(mktemp -p "/tmp" --suffix=".gravity")" tmpFile="$(mktemp -p "/tmp" --suffix=".gravity")"
local timestamp local timestamp
timestamp="$(date --utc +'%s')" timestamp="$(date --utc +'%s')"
local inputfile
if [[ "${table}" == "gravity" ]]; then local rowid
# No need to modify the input data for the gravity table declare -i rowid
inputfile="${source}" rowid=1
else
# Apply format for white-, blacklist, regex, and adlist tables # Special handling for domains to be imported into the common domainlist table
local rowid if [[ "${table}" == "whitelist" ]]; then
declare -i rowid type="0"
rowid=1 table="domainlist"
# Read file line by line elif [[ "${table}" == "blacklist" ]]; then
grep -v '^ *#' < "${source}" | while IFS= read -r domain type="1"
do table="domainlist"
# Only add non-empty lines elif [[ "${table}" == "regex" ]]; then
if [[ ! -z "${domain}" ]]; then type="3"
echo "${rowid},\"${domain}\",1,${timestamp},${timestamp},\"Migrated from ${source}\"" >> "${tmpFile}" table="domainlist"
rowid+=1
fi
done
inputfile="${tmpFile}"
fi fi
# Get MAX(id) from domainlist when INSERTing into this table
if [[ "${table}" == "domainlist" ]]; then
rowid="$(sqlite3 "${gravityDBfile}" "SELECT MAX(id) FROM domainlist;")"
if [[ -z "$rowid" ]]; then
rowid=0
fi
rowid+=1
fi
# Loop over all domains in ${source} file
# Read file line by line
grep -v '^ *#' < "${source}" | while IFS= read -r domain
do
# Only add non-empty lines
if [[ -n "${domain}" ]]; then
if [[ "${table}" == "domain_audit" ]]; then
# domain_audit table format (no enable or modified fields)
echo "${rowid},\"${domain}\",${timestamp}" >> "${tmpFile}"
elif [[ "${table}" == "adlist" ]]; then
# Adlist table format
echo "${rowid},\"${domain}\",1,${timestamp},${timestamp},\"Migrated from ${source}\"" >> "${tmpFile}"
else
# White-, black-, and regexlist table format
echo "${rowid},${type},\"${domain}\",1,${timestamp},${timestamp},\"Migrated from ${source}\"" >> "${tmpFile}"
fi
rowid+=1
fi
done
# Store domains in database table specified by ${table} # Store domains in database table specified by ${table}
# Use printf as .mode and .import need to be on separate lines # Use printf as .mode and .import need to be on separate lines
# see https://unix.stackexchange.com/a/445615/83260 # see https://unix.stackexchange.com/a/445615/83260
output=$( { printf ".timeout 10000\\n.mode csv\\n.import \"%s\" %s\\n" "${inputfile}" "${table}" | sqlite3 "${gravityDBfile}"; } 2>&1 ) output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" %s\\n" "${tmpFile}" "${table}" | sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?" status="$?"
if [[ "${status}" -ne 0 ]]; then if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to fill table ${table} in database ${gravityDBfile}\\n ${output}" echo -e "\\n ${CROSS} Unable to fill table ${table}${type} in database ${gravityDBfile}\\n ${output}"
gravity_Cleanup "error" gravity_Cleanup "error"
fi fi
# Delete tmpfile
rm "${tmpFile}" > /dev/null 2>&1 || \
echo -e " ${CROSS} Unable to remove ${tmpFile}"
# Move source file to backup directory, create directory if not existing # Move source file to backup directory, create directory if not existing
mkdir -p "${backup_path}" mkdir -p "${backup_path}"
mv "${source}" "${backup_file}" 2> /dev/null || \ mv "${source}" "${backup_file}" 2> /dev/null || \
echo -e " ${CROSS} Unable to backup ${source} to ${backup_path}" echo -e " ${CROSS} Unable to backup ${source} to ${backup_path}"
# Delete tmpFile
rm "${tmpFile}" > /dev/null 2>&1 || \
echo -e " ${CROSS} Unable to remove ${tmpFile}"
} }
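
The import above never issues per-row INSERTs: it writes one CSV row per domain to a temp file and feeds sqlite3 a tiny script whose .mode csv and .import directives sit on separate lines (hence the printf). A self-contained sketch of that technique against a throw-away database with a simplified stand-in for the adlist table:

    #!/usr/bin/env bash
    db="$(mktemp --suffix=".db")"
    tmpFile="$(mktemp --suffix=".csv")"
    timestamp="$(date --utc +'%s')"

    # Simplified stand-in schema: id,address,enabled,date_added,date_modified,comment
    sqlite3 "${db}" "CREATE TABLE adlist (id INTEGER PRIMARY KEY, address TEXT, enabled INTEGER, date_added INTEGER, date_modified INTEGER, comment TEXT);"

    rowid=1
    for url in "https://example.com/hosts" "https://example.org/list.txt"; do
        echo "${rowid},\"${url}\",1,${timestamp},${timestamp},\"Migrated\"" >> "${tmpFile}"
        rowid=$((rowid + 1))
    done

    # .mode and .import must be on separate lines, so printf builds the script
    printf ".timeout 30000\n.mode csv\n.import \"%s\" adlist\n" "${tmpFile}" | sqlite3 "${db}"

    sqlite3 "${db}" "SELECT id, address FROM adlist;"
    rm "${tmpFile}" "${db}"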
# Migrate pre-v5.0 list files to database-based Pi-hole versions # Migrate pre-v5.0 list files to database-based Pi-hole versions
migrate_to_database() { migrate_to_database() {
# Create database file only if not present # Create database file only if not present
if [ -e "${gravityDBfile}" ]; then if [ ! -e "${gravityDBfile}" ]; then
return 0 # Create new database file - note that this will be created in version 1
echo -e " ${INFO} Creating new gravity database"
generate_gravity_database "${gravityDBfile}"
# Check if gravity database needs to be updated
upgrade_gravityDB "${gravityDBfile}" "${piholeDir}"
# Migrate list files to new database
if [ -e "${adListFile}" ]; then
# Store adlist domains in database
echo -e " ${INFO} Migrating content of ${adListFile} into new database"
database_table_from_file "adlist" "${adListFile}"
fi
if [ -e "${blacklistFile}" ]; then
# Store blacklisted domains in database
echo -e " ${INFO} Migrating content of ${blacklistFile} into new database"
database_table_from_file "blacklist" "${blacklistFile}"
fi
if [ -e "${whitelistFile}" ]; then
# Store whitelisted domains in database
echo -e " ${INFO} Migrating content of ${whitelistFile} into new database"
database_table_from_file "whitelist" "${whitelistFile}"
fi
if [ -e "${regexFile}" ]; then
# Store regex domains in database
# Important note: We need to add the domains to the "regex" table
# as it will only later be renamed to "regex_blacklist"!
echo -e " ${INFO} Migrating content of ${regexFile} into new database"
database_table_from_file "regex" "${regexFile}"
fi
fi fi
echo -e " ${INFO} Creating new gravity database" # Check if gravity database needs to be updated
generate_gravity_database upgrade_gravityDB "${gravityDBfile}" "${piholeDir}"
# Migrate list files to new database
if [[ -e "${adListFile}" ]]; then
# Store adlist domains in database
echo -e " ${INFO} Migrating content of ${adListFile} into new database"
database_table_from_file "adlist" "${adListFile}"
fi
if [[ -e "${blacklistFile}" ]]; then
# Store blacklisted domains in database
echo -e " ${INFO} Migrating content of ${blacklistFile} into new database"
database_table_from_file "blacklist" "${blacklistFile}"
fi
if [[ -e "${whitelistFile}" ]]; then
# Store whitelisted domains in database
echo -e " ${INFO} Migrating content of ${whitelistFile} into new database"
database_table_from_file "whitelist" "${whitelistFile}"
fi
if [[ -e "${regexFile}" ]]; then
# Store regex domains in database
echo -e " ${INFO} Migrating content of ${regexFile} into new database"
database_table_from_file "regex" "${regexFile}"
fi
} }
# Determine if DNS resolution is available before proceeding # Determine if DNS resolution is available before proceeding
@@ -190,7 +259,7 @@ gravity_CheckDNSResolutionAvailable() {
fi fi
# Determine if $lookupDomain is resolvable # Determine if $lookupDomain is resolvable
if timeout 1 getent hosts "${lookupDomain}" &> /dev/null; then if timeout 4 getent hosts "${lookupDomain}" &> /dev/null; then
# Print confirmation of resolvability if it had previously failed # Print confirmation of resolvability if it had previously failed
if [[ -n "${secs:-}" ]]; then if [[ -n "${secs:-}" ]]; then
echo -e "${OVER} ${TICK} DNS resolution is now available\\n" echo -e "${OVER} ${TICK} DNS resolution is now available\\n"
@@ -204,7 +273,7 @@ gravity_CheckDNSResolutionAvailable() {
# If the /etc/resolv.conf contains resolvers other than 127.0.0.1 then the local dnsmasq will not be queried and pi.hole is NXDOMAIN. # If the /etc/resolv.conf contains resolvers other than 127.0.0.1 then the local dnsmasq will not be queried and pi.hole is NXDOMAIN.
# This means that even though name resolution is working, the getent hosts check fails and the holddown timer keeps ticking and eventually fails # This means that even though name resolution is working, the getent hosts check fails and the holddown timer keeps ticking and eventually fails
# So we check the output of the last command and if it failed, attempt to use dig +short as a fallback # So we check the output of the last command and if it failed, attempt to use dig +short as a fallback
if timeout 1 dig +short "${lookupDomain}" &> /dev/null; then if timeout 4 dig +short "${lookupDomain}" &> /dev/null; then
if [[ -n "${secs:-}" ]]; then if [[ -n "${secs:-}" ]]; then
echo -e "${OVER} ${TICK} DNS resolution is now available\\n" echo -e "${OVER} ${TICK} DNS resolution is now available\\n"
fi fi
@@ -237,12 +306,13 @@ gravity_CheckDNSResolutionAvailable() {
} }
# Retrieve blocklist URLs and parse domains from adlist.list # Retrieve blocklist URLs and parse domains from adlist.list
gravity_GetBlocklistUrls() { gravity_DownloadBlocklists() {
echo -e " ${INFO} ${COL_BOLD}Neutrino emissions detected${COL_NC}..." echo -e " ${INFO} ${COL_BOLD}Neutrino emissions detected${COL_NC}..."
# Retrieve source URLs from gravity database # Retrieve source URLs from gravity database
# We source only enabled adlists, sqlite3 stores boolean values as 0 (false) or 1 (true) # We source only enabled adlists, sqlite3 stores boolean values as 0 (false) or 1 (true)
mapfile -t sources <<< "$(sqlite3 "${gravityDBfile}" "SELECT address FROM vw_adlist;" 2> /dev/null)" mapfile -t sources <<< "$(sqlite3 "${gravityDBfile}" "SELECT address FROM vw_adlist;" 2> /dev/null)"
mapfile -t sourceIDs <<< "$(sqlite3 "${gravityDBfile}" "SELECT id FROM vw_adlist;" 2> /dev/null)"
# Parse source domains from $sources # Parse source domains from $sources
mapfile -t sourceDomains <<< "$( mapfile -t sourceDomains <<< "$(
@@ -259,21 +329,32 @@ gravity_GetBlocklistUrls() {
if [[ -n "${sources[*]}" ]] && [[ -n "${sourceDomains[*]}" ]]; then if [[ -n "${sources[*]}" ]] && [[ -n "${sourceDomains[*]}" ]]; then
echo -e "${OVER} ${TICK} ${str}" echo -e "${OVER} ${TICK} ${str}"
return 0
else else
echo -e "${OVER} ${CROSS} ${str}" echo -e "${OVER} ${CROSS} ${str}"
echo -e " ${INFO} No source list found, or it is empty" echo -e " ${INFO} No source list found, or it is empty"
echo "" echo ""
return 1 return 1
fi fi
}
# Define options for when retrieving blocklists
gravity_SetDownloadOptions() {
local url domain agent cmd_ext str
local url domain agent cmd_ext str target
echo "" echo ""
# Prepare new gravity database
str="Preparing new gravity database"
echo -ne " ${INFO} ${str}..."
rm "${gravityTEMPfile}" > /dev/null 2>&1
output=$( { sqlite3 "${gravityTEMPfile}" < "${gravityDBschema}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to create new database ${gravityTEMPfile}\\n ${output}"
gravity_Cleanup "error"
else
echo -e "${OVER} ${TICK} ${str}"
fi
target="$(mktemp -p "/tmp" --suffix=".gravity")"
# Loop through $sources and download each one # Loop through $sources and download each one
for ((i = 0; i < "${#sources[@]}"; i++)); do for ((i = 0; i < "${#sources[@]}"; i++)); do
url="${sources[$i]}" url="${sources[$i]}"
@@ -292,16 +373,90 @@ gravity_SetDownloadOptions() {
*) cmd_ext="";; *) cmd_ext="";;
esac esac
echo -e " ${INFO} Target: ${domain} (${url##*/})" echo -e " ${INFO} Target: ${url}"
gravity_DownloadBlocklistFromUrl "${url}" "${cmd_ext}" "${agent}" local regex
# Check for characters NOT allowed in URLs
regex="[^a-zA-Z0-9:/?&%=~._()-]"
if [[ "${url}" =~ ${regex} ]]; then
echo -e " ${CROSS} Invalid Target"
else
gravity_DownloadBlocklistFromUrl "${url}" "${cmd_ext}" "${agent}" "${sourceIDs[$i]}" "${saveLocation}" "${target}"
fi
echo "" echo ""
done done
str="Storing downloaded domains in new gravity database"
echo -ne " ${INFO} ${str}..."
output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" gravity\\n" "${target}" | sqlite3 "${gravityTEMPfile}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to fill gravity table in database ${gravityTEMPfile}\\n ${output}"
gravity_Cleanup "error"
else
echo -e "${OVER} ${TICK} ${str}"
fi
if [[ "${status}" -eq 0 && -n "${output}" ]]; then
echo -e " Encountered non-critical SQL warnings. Please check the suitability of the lists you're using!\\n\\n SQL warnings:"
local warning file line lineno
while IFS= read -r line; do
echo " - ${line}"
warning="$(grep -oh "^[^:]*:[0-9]*" <<< "${line}")"
file="${warning%:*}"
lineno="${warning#*:}"
if [[ -n "${file}" && -n "${lineno}" ]]; then
echo -n " Line contains: "
awk "NR==${lineno}" < "${file}"
fi
done <<< "${output}"
echo ""
fi
rm "${target}" > /dev/null 2>&1 || \
echo -e " ${CROSS} Unable to remove ${target}"
gravity_Blackbody=true gravity_Blackbody=true
} }
total_num=0
parseList() {
local adlistID="${1}" src="${2}" target="${3}" incorrect_lines
# This sed does the following things:
# 1. Remove all domains containing invalid characters. Valid are: a-z, A-Z, 0-9, dot (.), minus (-), underscore (_)
# 2. Append ,adlistID to every line
# 3. Ensures there is a newline on the last line
sed -e "/[^a-zA-Z0-9.\_-]/d;s/$/,${adlistID}/;/.$/a\\" "${src}" >> "${target}"
# Find (up to) five domains containing invalid characters (see above)
incorrect_lines="$(sed -e "/[^a-zA-Z0-9.\_-]/!d" "${src}" | head -n 5)"
local num_lines num_target_lines num_correct_lines num_invalid
# Get number of lines in source file
num_lines="$(grep -c "^" "${src}")"
# Get number of lines in destination file
num_target_lines="$(grep -c "^" "${target}")"
num_correct_lines="$(( num_target_lines-total_num ))"
total_num="$num_target_lines"
num_invalid="$(( num_lines-num_correct_lines ))"
if [[ "${num_invalid}" -eq 0 ]]; then
echo " ${INFO} Received ${num_lines} domains"
else
echo " ${INFO} Received ${num_lines} domains, ${num_invalid} domains invalid!"
fi
# Display sample of invalid lines if we found some
if [[ -n "${incorrect_lines}" ]]; then
echo " Sample of invalid domains:"
while IFS= read -r line; do
echo " - ${line}"
done <<< "${incorrect_lines}"
fi
}
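
parseList does its filtering and tagging in a single sed pass: drop every line containing a character outside a-z, A-Z, 0-9, dot, underscore and minus, and append the adlist ID to the survivors; a second sed with the inverted filter samples the rejects. A cut-down sketch of the same idea on an inline test list (the trailing /.$/a\\ of the real command, which only forces a final newline, is left out here):

    #!/usr/bin/env bash
    src="$(mktemp)"
    target="$(mktemp)"
    adlistID=7

    printf 'ads.example.com\nbad domain!with spaces\ntracker.example.net\n' > "${src}"

    # Keep only plain hostnames and tag each with the adlist ID
    sed -e "/[^a-zA-Z0-9.\_-]/d;s/$/,${adlistID}/" "${src}" >> "${target}"

    # Sample up to five rejected lines for the user
    incorrect_lines="$(sed -e "/[^a-zA-Z0-9.\_-]/!d" "${src}" | head -n 5)"

    num_lines="$(grep -c "^" "${src}")"
    num_kept="$(grep -c "^" "${target}")"
    echo "Received ${num_lines} domains, $((num_lines - num_kept)) domains invalid!"
    [[ -z "${incorrect_lines}" ]] || printf 'Sample of invalid domains:\n%s\n' "${incorrect_lines}"

    rm "${src}" "${target}"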
# Download specified URL and perform checks on HTTP status and file content # Download specified URL and perform checks on HTTP status and file content
gravity_DownloadBlocklistFromUrl() { gravity_DownloadBlocklistFromUrl() {
local url="${1}" cmd_ext="${2}" agent="${3}" heisenbergCompensator="" patternBuffer str httpCode success="" local url="${1}" cmd_ext="${2}" agent="${3}" adlistID="${4}" saveLocation="${5}" target="${6}"
local heisenbergCompensator="" patternBuffer str httpCode success=""
# Create temp file to store content on disk instead of RAM # Create temp file to store content on disk instead of RAM
patternBuffer=$(mktemp -p "/tmp" --suffix=".phgpb") patternBuffer=$(mktemp -p "/tmp" --suffix=".phgpb")
@@ -339,7 +494,7 @@ gravity_DownloadBlocklistFromUrl() {
else else
printf -v port "%s" "${PIHOLE_DNS_1#*#}" printf -v port "%s" "${PIHOLE_DNS_1#*#}"
fi fi
ip=$(dig "@${ip_addr}" -p "${port}" +short "${domain}") ip=$(dig "@${ip_addr}" -p "${port}" +short "${domain}" | tail -1)
if [[ $(echo "${url}" | awk -F '://' '{print $1}') = "https" ]]; then if [[ $(echo "${url}" | awk -F '://' '{print $1}') = "https" ]]; then
port=443; port=443;
else port=80 else port=80
@@ -382,11 +537,14 @@ gravity_DownloadBlocklistFromUrl() {
# Determine if the blocklist was downloaded and saved correctly # Determine if the blocklist was downloaded and saved correctly
if [[ "${success}" == true ]]; then if [[ "${success}" == true ]]; then
if [[ "${httpCode}" == "304" ]]; then if [[ "${httpCode}" == "304" ]]; then
: # Do not attempt to re-parse file # Add domains to database table file
parseList "${adlistID}" "${saveLocation}" "${target}"
# Check if $patternbuffer is a non-zero length file # Check if $patternbuffer is a non-zero length file
elif [[ -s "${patternBuffer}" ]]; then elif [[ -s "${patternBuffer}" ]]; then
# Determine if blocklist is non-standard and parse as appropriate # Determine if blocklist is non-standard and parse as appropriate
gravity_ParseFileIntoDomains "${patternBuffer}" "${saveLocation}" gravity_ParseFileIntoDomains "${patternBuffer}" "${saveLocation}"
# Add domains to database table file
parseList "${adlistID}" "${saveLocation}" "${target}"
else else
# Fall back to previously cached list if $patternBuffer is empty # Fall back to previously cached list if $patternBuffer is empty
echo -e " ${INFO} Received empty file: ${COL_LIGHT_GREEN}using previously cached list${COL_NC}" echo -e " ${INFO} Received empty file: ${COL_LIGHT_GREEN}using previously cached list${COL_NC}"
@@ -395,6 +553,8 @@ gravity_DownloadBlocklistFromUrl() {
# Determine if cached list has read permission # Determine if cached list has read permission
if [[ -r "${saveLocation}" ]]; then if [[ -r "${saveLocation}" ]]; then
echo -e " ${CROSS} List download failed: ${COL_LIGHT_GREEN}using previously cached list${COL_NC}" echo -e " ${CROSS} List download failed: ${COL_LIGHT_GREEN}using previously cached list${COL_NC}"
# Add domains to database table file
parseList "${adlistID}" "${saveLocation}" "${target}"
else else
echo -e " ${CROSS} List download failed: ${COL_LIGHT_RED}no cached list available${COL_NC}" echo -e " ${CROSS} List download failed: ${COL_LIGHT_RED}no cached list available${COL_NC}"
fi fi
@@ -403,27 +563,29 @@ gravity_DownloadBlocklistFromUrl() {
# Parse source files into domains format # Parse source files into domains format
gravity_ParseFileIntoDomains() { gravity_ParseFileIntoDomains() {
local source="${1}" destination="${2}" firstLine abpFilter local source="${1}" destination="${2}" firstLine
# Determine if we are parsing a consolidated list # Determine if we are parsing a consolidated list
if [[ "${source}" == "${piholeDir}/${matterAndLight}" ]]; then #if [[ "${source}" == "${piholeDir}/${matterAndLight}" ]]; then
# Remove comments and print only the domain name # Remove comments and print only the domain name
# Most of the lists downloaded are already in hosts file format but the spacing/formatting is not contiguous # Most of the lists downloaded are already in hosts file format but the spacing/formatting is not contiguous
# This helps with that and makes it easier to read # This helps with that and makes it easier to read
# It also helps with debugging so each stage of the script can be researched more in depth # It also helps with debugging so each stage of the script can be researched more in depth
# 1) Remove carriage returns # 1) Remove carriage returns
# 2) Convert all characters to lowercase # 2) Convert all characters to lowercase
# 3) Remove lines containing "#" or "/" # 3) Remove comments (text starting with "#", include possible spaces before the hash sign)
# 4) Remove leading tabs, spaces, etc. # 4) Remove lines containing "/"
# 5) Delete lines not matching domain names # 5) Remove leading tabs, spaces, etc.
# 6) Delete lines not matching domain names
< "${source}" tr -d '\r' | \ < "${source}" tr -d '\r' | \
tr '[:upper:]' '[:lower:]' | \ tr '[:upper:]' '[:lower:]' | \
sed -r '/(\/|#).*$/d' | \ sed 's/\s*#.*//g' | \
sed -r '/(\/).*$/d' | \
sed -r 's/^.*\s+//g' | \ sed -r 's/^.*\s+//g' | \
sed -r '/([^\.]+\.)+[^\.]{2,}/!d' > "${destination}" sed -r '/([^\.]+\.)+[^\.]{2,}/!d' > "${destination}"
chmod 644 "${destination}" chmod 644 "${destination}"
return 0 return 0
fi #fi
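
The hosts-format branch kept above is a plain text pipeline: strip carriage returns, lowercase, remove comments, drop anything containing a slash, strip the leading address column, and keep only tokens that look like domain names. The same pipeline as a standalone filter reading stdin (a sketch, not the script's exact function):

    #!/usr/bin/env bash
    # Reduce a hosts-style list on stdin to bare, lowercased domain names.
    parse_hosts_list() {
        tr -d '\r' | \
            tr '[:upper:]' '[:lower:]' | \
            sed 's/\s*#.*//g' | \
            sed -r '/(\/).*$/d' | \
            sed -r 's/^.*\s+//g' | \
            sed -r '/([^\.]+\.)+[^\.]{2,}/!d'
    }

    printf '0.0.0.0 Ads.Example.COM  # comment\n127.0.0.1 localhost\nnot a domain\n' \
        | parse_hosts_list
    # prints: ads.example.com   (localhost and "not a domain" are filtered out)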
# Individual file parsing: Keep comments, while parsing domains from each line # Individual file parsing: Keep comments, while parsing domains from each line
# We keep comments to respect the list maintainer's licensing # We keep comments to respect the list maintainer's licensing
@@ -432,48 +594,7 @@ gravity_ParseFileIntoDomains() {
# Determine how to parse individual source file formats # Determine how to parse individual source file formats
if [[ "${firstLine,,}" =~ (adblock|ublock|^!) ]]; then if [[ "${firstLine,,}" =~ (adblock|ublock|^!) ]]; then
# Compare $firstLine against lower case words found in Adblock lists # Compare $firstLine against lower case words found in Adblock lists
echo -ne " ${INFO} Format: Adblock" echo -e " ${CROSS} Format: Adblock (list type not supported)"
# Define symbols used as comments: [!
# "||.*^" includes the "Example 2" domains we can extract
# https://adblockplus.org/filter-cheatsheet
abpFilter="/^(\\[|!)|^(\\|\\|.*\\^)/"
# Parse Adblock lists by extracting "Example 2" domains
# Logic: Ignore lines which do not include comments or domain name anchor
awk ''"${abpFilter}"' {
# Remove valid adblock type options
gsub(/\$?~?(important|third-party|popup|subdocument|websocket),?/, "", $0)
# Remove starting domain name anchor "||" and ending seperator "^"
gsub(/^(\|\|)|(\^)/, "", $0)
# Remove invalid characters (*/,=$)
if($0 ~ /[*\/,=\$]/) { $0="" }
# Remove lines which are only IPv4 addresses
if($0 ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) { $0="" }
if($0) { print $0 }
}' "${source}" > "${destination}"
chmod 644 "${destination}"
# Determine if there are Adblock exception rules
# https://adblockplus.org/filters
if grep -q "^@@||" "${source}" &> /dev/null; then
# Parse Adblock lists by extracting exception rules
# Logic: Ignore lines which do not include exception format "@@||example.com^"
awk -F "[|^]" '/^@@\|\|.*\^/ {
# Remove valid adblock type options
gsub(/\$?~?(third-party)/, "", $0)
# Remove invalid characters (*/,=$)
if($0 ~ /[*\/,=\$]/) { $0="" }
if($3) { print $3 }
}' "${source}" > "${destination}.exceptionsFile.tmp"
# Remove exceptions
comm -23 "${destination}" <(sort "${destination}.exceptionsFile.tmp") > "${source}"
mv "${source}" "${destination}"
chmod 644 "${destination}"
fi
echo -e "${OVER} ${TICK} Format: Adblock"
elif grep -q "^address=/" "${source}" &> /dev/null; then elif grep -q "^address=/" "${source}" &> /dev/null; then
# Parse Dnsmasq format lists # Parse Dnsmasq format lists
echo -e " ${CROSS} Format: Dnsmasq (list type not supported)" echo -e " ${CROSS} Format: Dnsmasq (list type not supported)"
@@ -510,79 +631,29 @@ gravity_ParseFileIntoDomains() {
fi fi
} }
# Create (unfiltered) "Matter and Light" consolidated list
gravity_ConsolidateDownloadedBlocklists() {
local str lastLine
str="Consolidating blocklists"
echo -ne " ${INFO} ${str}..."
# Empty $matterAndLight if it already exists, otherwise, create it
: > "${piholeDir}/${matterAndLight}"
chmod 644 "${piholeDir}/${matterAndLight}"
# Loop through each *.domains file
for i in "${activeDomains[@]}"; do
# Determine if file has read permissions, as download might have failed
if [[ -r "${i}" ]]; then
# Remove windows CRs from file, convert list to lower case, and append into $matterAndLight
tr -d '\r' < "${i}" | tr '[:upper:]' '[:lower:]' >> "${piholeDir}/${matterAndLight}"
# Ensure that the first line of a new list is on a new line
lastLine=$(tail -1 "${piholeDir}/${matterAndLight}")
if [[ "${#lastLine}" -gt 0 ]]; then
echo "" >> "${piholeDir}/${matterAndLight}"
fi
fi
done
echo -e "${OVER} ${TICK} ${str}"
}
# Parse consolidated list into (filtered, unique) domains-only format
gravity_SortAndFilterConsolidatedList() {
local str num
str="Extracting domains from blocklists"
echo -ne " ${INFO} ${str}..."
# Parse into file
gravity_ParseFileIntoDomains "${piholeDir}/${matterAndLight}" "${piholeDir}/${parsedMatter}"
# Format $parsedMatter line total as currency
num=$(printf "%'.0f" "$(wc -l < "${piholeDir}/${parsedMatter}")")
echo -e "${OVER} ${TICK} ${str}"
echo -e " ${INFO} Gravity pulled in ${COL_BLUE}${num}${COL_NC} domains"
str="Removing duplicate domains"
echo -ne " ${INFO} ${str}..."
sort -u "${piholeDir}/${parsedMatter}" > "${piholeDir}/${preEventHorizon}"
chmod 644 "${piholeDir}/${preEventHorizon}"
echo -e "${OVER} ${TICK} ${str}"
# Format $preEventHorizon line total as currency
num=$(printf "%'.0f" "$(wc -l < "${piholeDir}/${preEventHorizon}")")
str="Storing ${COL_BLUE}${num}${COL_NC} unique blocking domains in database"
echo -ne " ${INFO} ${str}..."
database_table_from_file "gravity" "${piholeDir}/${preEventHorizon}"
echo -e "${OVER} ${TICK} ${str}"
}
# Report number of entries in a table # Report number of entries in a table
gravity_Table_Count() { gravity_Table_Count() {
local table="${1}" local table="${1}"
local str="${2}" local str="${2}"
local num local num
num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${table} WHERE enabled = 1;")" num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${table};")"
echo -e " ${INFO} Number of ${str}: ${num}" if [[ "${table}" == "vw_gravity" ]]; then
local unique
unique="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(DISTINCT domain) FROM ${table};")"
echo -e " ${INFO} Number of ${str}: ${num} (${COL_BOLD}${unique} unique domains${COL_NC})"
sqlite3 "${gravityDBfile}" "INSERT OR REPLACE INTO info (property,value) VALUES ('gravity_count',${unique});"
else
echo -e " ${INFO} Number of ${str}: ${num}"
fi
} }
# Output count of blacklisted domains and regex filters # Output count of blacklisted domains and regex filters
gravity_ShowCount() { gravity_ShowCount() {
gravity_Table_Count "blacklist" "blacklisted domains" gravity_Table_Count "vw_gravity" "gravity domains" ""
gravity_Table_Count "whitelist" "whitelisted domains" gravity_Table_Count "vw_blacklist" "exact blacklisted domains"
gravity_Table_Count "regex" "regex filters" gravity_Table_Count "vw_regex_blacklist" "regex blacklist filters"
gravity_Table_Count "vw_whitelist" "exact whitelisted domains"
gravity_Table_Count "vw_regex_whitelist" "regex whitelist filters"
} }
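
The counts above are plain SELECT COUNT(*) queries per view, plus one COUNT(DISTINCT domain) for vw_gravity whose result is also written back into the info table as gravity_count. A small sketch of those two queries against an assumed gravity.db location:

    #!/usr/bin/env bash
    gravityDBfile="/etc/pihole/gravity.db"   # assumed database location

    count_table() {
        local table="${1}" label="${2}" num
        num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${table};")"
        echo "  Number of ${label}: ${num}"
    }

    # Gravity additionally reports and stores the number of unique blocked domains
    unique="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(DISTINCT domain) FROM vw_gravity;")"
    sqlite3 "${gravityDBfile}" "INSERT OR REPLACE INTO info (property,value) VALUES ('gravity_count',${unique});"

    count_table "vw_gravity" "gravity domains"
    count_table "vw_blacklist" "exact blacklisted domains"
    count_table "vw_whitelist" "exact whitelisted domains"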
# Parse list of domains into hosts format # Parse list of domains into hosts format
@@ -648,7 +719,7 @@ gravity_Cleanup() {
# Ensure this function only runs when gravity_SetDownloadOptions() has completed # Ensure this function only runs when gravity_SetDownloadOptions() has completed
if [[ "${gravity_Blackbody:-}" == true ]]; then if [[ "${gravity_Blackbody:-}" == true ]]; then
# Remove any unused .domains files # Remove any unused .domains files
for file in ${piholeDir}/*.${domainsExtension}; do for file in "${piholeDir}"/*."${domainsExtension}"; do
# If list is not in active array, then remove it # If list is not in active array, then remove it
if [[ ! "${activeDomains[*]}" == *"${file}"* ]]; then if [[ ! "${activeDomains[*]}" == *"${file}"* ]]; then
rm -f "${file}" 2> /dev/null || \ rm -f "${file}" 2> /dev/null || \
@@ -701,6 +772,7 @@ for var in "$@"; do
case "${var}" in case "${var}" in
"-f" | "--force" ) forceDelete=true;; "-f" | "--force" ) forceDelete=true;;
"-o" | "--optimize" ) optimize_database=true;; "-o" | "--optimize" ) optimize_database=true;;
"-r" | "--recreate" ) recreate_database=true;;
"-h" | "--help" ) helpFunc;; "-h" | "--help" ) helpFunc;;
esac esac
done done
@@ -708,6 +780,16 @@ done
# Trap Ctrl-C # Trap Ctrl-C
gravity_Trap gravity_Trap
if [[ "${recreate_database:-}" == true ]]; then
str="Restoring from migration backup"
echo -ne "${INFO} ${str}..."
rm "${gravityDBfile}"
pushd "${piholeDir}" > /dev/null || exit
cp migration_backup/* .
popd > /dev/null || exit
echo -e "${OVER} ${TICK} ${str}"
fi
# Move possibly existing legacy files to the gravity database # Move possibly existing legacy files to the gravity database
migrate_to_database migrate_to_database
@@ -721,22 +803,30 @@ fi
 # Gravity downloads blocklists next
 gravity_CheckDNSResolutionAvailable
-if gravity_GetBlocklistUrls; then
-  gravity_SetDownloadOptions
-  # Build preEventHorizon
-  gravity_ConsolidateDownloadedBlocklists
-  gravity_SortAndFilterConsolidatedList
-fi
+gravity_DownloadBlocklists
 # Create local.list
 gravity_generateLocalList
-gravity_ShowCount
-gravity_Cleanup
-echo ""
+# Migrate rest of the data from old to new database
+gravity_swap_databases
+# Update gravity timestamp
+update_gravity_timestamp
+# Ensure proper permissions are set for the database
+chown pihole:pihole "${gravityDBfile}"
+chmod g+w "${piholeDir}" "${gravityDBfile}"
+# Compute numbers to be displayed
+gravity_ShowCount
 # Determine if DNS has been restarted by this instance of gravity
 if [[ -z "${dnsWasOffline:-}" ]]; then
     "${PIHOLE_COMMAND}" restartdns reload
 fi
+gravity_Cleanup
+echo ""
 "${PIHOLE_COMMAND}" status


@@ -1,4 +1,4 @@
-.TH "Pi-hole" "8" "Pi-hole" "Pi-hole" "May 2018"
+.TH "Pi-hole" "8" "Pi-hole" "Pi-hole" "April 2020"
 .SH "NAME"
 Pi-hole : A black-hole for internet advertisements
@@ -11,8 +11,6 @@ Pi-hole : A black-hole for internet advertisements
 .br
 \fBpihole -a\fR (\fB-c|-f|-k\fR)
 .br
-\fBpihole -a\fR [\fB-r\fR hostrecord]
-.br
 \fBpihole -a -e\fR email
 .br
 \fBpihole -a -i\fR interface
@@ -43,7 +41,7 @@ pihole -g\fR
 .br
 pihole status
 .br
-pihole restartdns\fR
+pihole restartdns\fR [options]
 .br
 \fBpihole\fR (\fBenable\fR|\fBdisable\fR [time])
 .br
@@ -66,14 +64,24 @@ Available commands and options:
 Adds or removes specified domain or domains to the blacklist
 .br
+\fB--regex, regex\fR [options] [<regex1> <regex2 ...>]
+.br
+Add or removes specified regex filter to the regex blacklist
+.br
+\fB--white-regex\fR [options] [<regex1> <regex2 ...>]
+.br
+Add or removes specified regex filter to the regex whitelist
+.br
 \fB--wild, wildcard\fR [options] [<domain1> <domain2 ...>]
 .br
 Add or removes specified domain to the wildcard blacklist
 .br
-\fB--regex, regex\fR [options] [<regex1> <regex2 ...>]
+\fB--white-wild\fR [options] [<domain1> <domain2 ...>]
 .br
-Add or removes specified regex filter to the regex blacklist
+Add or removes specified domain to the wildcard whitelist
 .br
 (Whitelist/Blacklist manipulation options):
@@ -124,9 +132,6 @@ Available commands and options:
 -f, fahrenheit      Set Fahrenheit as preferred temperature unit
 .br
 -k, kelvin          Set Kelvin as preferred temperature unit
-.br
--r, hostrecord      Add a name to the DNS associated to an
-                    IPv4/IPv6 address
 .br
 -e, email           Set an administrative contact address for the
                     Block Page
@@ -250,9 +255,16 @@ Available commands and options:
 #m                  Disable Pi-hole functionality for # minute(s)
 .br
-\fBrestartdns\fR
+\fBrestartdns\fR [options]
 .br
-Restart Pi-hole subsystems
+Full restart Pi-hole subsystems
+.br
+(restart options):
+.br
+reload              Updates the lists and flushes DNS cache
+.br
+reload-lists        Updates the lists WITHOUT flushing the DNS cache
 .br
 \fBcheckout\fR [repo] [branch]
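Expressed as commands, the three forms documented above are (usage sketch):

    pihole restartdns               # full restart of the DNS service
    pihole restartdns reload        # update lists and flush the DNS cache
    pihole restartdns reload-lists  # update lists without touching the cache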

pihole

@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # Pi-hole: A black hole for Internet advertisements
 # (c) 2017 Pi-hole, LLC (https://pi-hole.net)
@@ -11,10 +11,11 @@
 readonly PI_HOLE_SCRIPT_DIR="/opt/pihole"
-# setupVars is not readonly here because in some functions (checkout),
-# it might get set again when the installer is sourced. This causes an
+# setupVars and PI_HOLE_BIN_DIR are not readonly here because in some functions (checkout),
+# they might get set again when the installer is sourced. This causes an
 # error due to modifying a readonly variable.
 setupVars="/etc/pihole/setupVars.conf"
+PI_HOLE_BIN_DIR="/usr/local/bin"
 readonly colfile="${PI_HOLE_SCRIPT_DIR}/COL_TABLE"
 source "${colfile}"
@@ -101,24 +102,28 @@ versionFunc() {
 restartDNS() {
     local svcOption svc str output status
-    svcOption="${1:-}"
+    svcOption="${1:-restart}"
-    # Determine if we should reload or restart restart
-    if [[ "${svcOption}" =~ "reload" ]]; then
-        # Using SIGHUP will NOT re-read any *.conf files
+    # Determine if we should reload or restart
+    if [[ "${svcOption}" =~ "reload-lists" ]]; then
+        # Reloading of the lists has been requested
+        # Note: This will NOT re-read any *.conf files
+        # Note 2: We cannot use killall here as it does
+        #         not know about real-time signals
+        svc="kill -SIGRTMIN $(pidof ${resolver})"
+        str="Reloading DNS lists"
+    elif [[ "${svcOption}" =~ "reload" ]]; then
+        # Reloading of the DNS cache has been requested
+        # Note: This will NOT re-read any *.conf files
         svc="killall -s SIGHUP ${resolver}"
+        str="Flushing DNS cache"
     else
-        # Get PID of resolver to determine if it needs to start or restart
-        if pidof pihole-FTL &> /dev/null; then
-            svcOption="restart"
-        else
-            svcOption="start"
-        fi
-        svc="service ${resolver} ${svcOption}"
+        # A full restart has been requested
+        svc="service ${resolver} restart"
+        str="Restarting DNS server"
     fi
     # Print output to Terminal, but not to Web Admin
-    str="${svcOption^}ing DNS service"
     [[ -t 1 ]] && echo -ne " ${INFO} ${str}..."
     output=$( { ${svc}; } 2>&1 )
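In short, the three branches above reduce to the following commands, assuming the stock pihole-FTL resolver (illustrative, shown outside the function):

    resolver="pihole-FTL"
    kill -SIGRTMIN "$(pidof ${resolver})"   # reload-lists: re-read lists, keep the DNS cache
    killall -s SIGHUP "${resolver}"         # reload: re-read lists and flush the DNS cache
    service "${resolver}" restart           # default: full service restart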
@@ -159,7 +164,7 @@ Time:
local str="Disabling blocking for ${tt} seconds" local str="Disabling blocking for ${tt} seconds"
echo -e " ${INFO} ${str}..." echo -e " ${INFO} ${str}..."
local str="Blocking will be re-enabled in ${tt} seconds" local str="Blocking will be re-enabled in ${tt} seconds"
nohup bash -c "sleep ${tt}; pihole enable" </dev/null &>/dev/null & nohup bash -c "sleep ${tt}; ${PI_HOLE_BIN_DIR}/pihole enable" </dev/null &>/dev/null &
else else
local error=true local error=true
fi fi
@@ -170,7 +175,7 @@ Time:
             echo -e " ${INFO} ${str}..."
             local str="Blocking will be re-enabled in ${tt} minutes"
             tt=$((${tt}*60))
-            nohup bash -c "sleep ${tt}; pihole enable" </dev/null &>/dev/null &
+            nohup bash -c "sleep ${tt}; ${PI_HOLE_BIN_DIR}/pihole enable" </dev/null &>/dev/null &
         else
             local error=true
         fi
@@ -226,7 +231,7 @@ Options:
         sed -i 's/^QUERY_LOGGING=true/QUERY_LOGGING=false/' /etc/pihole/setupVars.conf
         if [[ "${2}" != "noflush" ]]; then
             # Flush logs
-            pihole -f
+            "${PI_HOLE_BIN_DIR}"/pihole -f
         fi
         echo -e " ${INFO} Disabling logging..."
         local str="Logging has been disabled!"
@@ -279,7 +284,7 @@ statusFunc() {
             *) echo -e " ${INFO} Pi-hole blocking will be enabled";;
         esac
         # Enable blocking
-        pihole enable
+        "${PI_HOLE_BIN_DIR}"/pihole enable
     fi
 }
@@ -301,8 +306,8 @@ tailFunc() {
     # Colour A/AAAA/DHCP strings as white
     # Colour everything else as gray
     tail -f /var/log/pihole.log | sed -E \
-        -e "s,($(date +'%b %d ')| dnsmasq[.*[0-9]]),,g" \
-        -e "s,(.*(gravity |black |regex | config ).* is (0.0.0.0|::|NXDOMAIN|${IPV4_ADDRESS%/*}|${IPV6_ADDRESS:-NULL}).*),${COL_RED}&${COL_NC}," \
+        -e "s,($(date +'%b %d ')| dnsmasq\[[0-9]*\]),,g" \
+        -e "s,(.*(blacklisted |gravity blocked ).* is (0.0.0.0|::|NXDOMAIN|${IPV4_ADDRESS%/*}|${IPV6_ADDRESS:-NULL}).*),${COL_RED}&${COL_NC}," \
         -e "s,.*(query\\[A|DHCP).*,${COL_NC}&${COL_NC}," \
         -e "s,.*,${COL_GRAY}&${COL_NC},"
     exit 0
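The old expression treated the brackets as a character class, so the dnsmasq PID tag was never stripped; the escaped form matches it literally. A quick illustration of the corrected pattern on a made-up log line:

    echo " dnsmasq[1234]: query[A] example.com from 192.168.1.2" \
        | sed -E "s,( dnsmasq\[[0-9]*\]),,g"
    # prints ": query[A] example.com from 192.168.1.2"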
@@ -375,8 +380,10 @@ Add '-h' after specific commands for more information on usage
 Whitelist/Blacklist Options:
   -w, whitelist       Whitelist domain(s)
   -b, blacklist       Blacklist domain(s)
-  --wild, wildcard    Wildcard blacklist domain(s)
-  --regex, regex      Regex blacklist domains(s)
+  --regex, regex      Regex blacklist domains(s)
+  --white-regex       Regex whitelist domains(s)
+  --wild, wildcard    Wildcard blacklist domain(s)
+  --white-wild        Wildcard whitelist domain(s)
     Add '-h' for more info on whitelist/blacklist usage
 Debugging Options:
@@ -406,7 +413,9 @@ Options:
   enable              Enable Pi-hole subsystems
   disable             Disable Pi-hole subsystems
                         Add '-h' for more info on disable usage
-  restartdns          Restart Pi-hole subsystems
+  restartdns          Full restart Pi-hole subsystems
+                        Add 'reload' to update the lists and flush the cache without restarting the DNS server
+                        Add 'reload-lists' to only update the lists WITHOUT flushing the cache or restarting the DNS server
   checkout            Switch Pi-hole subsystems to a different Github branch
                         Add '-h' for more info on checkout usage
   arpflush            Flush information stored in Pi-hole's network tables";
@@ -438,6 +447,8 @@ case "${1}" in
"-b" | "blacklist" ) listFunc "$@";; "-b" | "blacklist" ) listFunc "$@";;
"--wild" | "wildcard" ) listFunc "$@";; "--wild" | "wildcard" ) listFunc "$@";;
"--regex" | "regex" ) listFunc "$@";; "--regex" | "regex" ) listFunc "$@";;
"--white-regex" | "white-regex" ) listFunc "$@";;
"--white-wild" | "white-wild" ) listFunc "$@";;
"-d" | "debug" ) debugFunc "$@";; "-d" | "debug" ) debugFunc "$@";;
"-f" | "flush" ) flushFunc "$@";; "-f" | "flush" ) flushFunc "$@";;
"-up" | "updatePihole" ) updatePiholeFunc "$@";; "-up" | "updatePihole" ) updatePiholeFunc "$@";;


@@ -92,235 +92,16 @@ def test_setupVars_saved_to_file(Pihole):
assert "{}={}".format(k, v) in output assert "{}={}".format(k, v) in output
def test_configureFirewall_firewalld_running_no_errors(Pihole): def test_selinux_not_detected(Pihole):
''' '''
confirms firewalld rules are applied when firewallD is running confirms installer continues when SELinux configuration file does not exist
''' '''
-    # firewallD returns 'running' as status
-    mock_command('firewall-cmd', {'*': ('running', 0)}, Pihole)
-    # Whiptail dialog returns Ok for user prompt
-    mock_command('whiptail', {'*': ('', 0)}, Pihole)
-    configureFirewall = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    configureFirewall
-    ''')
-    expected_stdout = 'Configuring FirewallD for httpd and pihole-FTL'
-    assert expected_stdout in configureFirewall.stdout
-    firewall_calls = Pihole.run('cat /var/log/firewall-cmd').stdout
-    assert 'firewall-cmd --state' in firewall_calls
-    assert ('firewall-cmd '
-            '--permanent '
-            '--add-service=http '
-            '--add-service=dns') in firewall_calls
-    assert 'firewall-cmd --reload' in firewall_calls
-def test_configureFirewall_firewalld_disabled_no_errors(Pihole):
-    '''
-    confirms firewalld rules are not applied when firewallD is not running
-    '''
-    # firewallD returns non-running status
-    mock_command('firewall-cmd', {'*': ('not running', '1')}, Pihole)
-    configureFirewall = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    configureFirewall
-    ''')
-    expected_stdout = ('No active firewall detected.. '
-                       'skipping firewall configuration')
-    assert expected_stdout in configureFirewall.stdout
-def test_configureFirewall_firewalld_enabled_declined_no_errors(Pihole):
-    '''
-    confirms firewalld rules are not applied when firewallD is running, user
-    declines ruleset
-    '''
-    # firewallD returns running status
-    mock_command('firewall-cmd', {'*': ('running', 0)}, Pihole)
-    # Whiptail dialog returns Cancel for user prompt
-    mock_command('whiptail', {'*': ('', 1)}, Pihole)
-    configureFirewall = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    configureFirewall
-    ''')
-    expected_stdout = 'Not installing firewall rulesets.'
-    assert expected_stdout in configureFirewall.stdout
-def test_configureFirewall_no_firewall(Pihole):
-    ''' confirms firewall skipped no daemon is running '''
-    configureFirewall = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    configureFirewall
-    ''')
-    expected_stdout = 'No active firewall detected'
-    assert expected_stdout in configureFirewall.stdout
-def test_configureFirewall_IPTables_enabled_declined_no_errors(Pihole):
-    '''
-    confirms IPTables rules are not applied when IPTables is running, user
-    declines ruleset
-    '''
-    # iptables command exists
-    mock_command('iptables', {'*': ('', '0')}, Pihole)
-    # modinfo returns always true (ip_tables module check)
-    mock_command('modinfo', {'*': ('', '0')}, Pihole)
-    # Whiptail dialog returns Cancel for user prompt
-    mock_command('whiptail', {'*': ('', '1')}, Pihole)
-    configureFirewall = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    configureFirewall
-    ''')
-    expected_stdout = 'Not installing firewall rulesets.'
-    assert expected_stdout in configureFirewall.stdout
-def test_configureFirewall_IPTables_enabled_rules_exist_no_errors(Pihole):
-    '''
-    confirms IPTables rules are not applied when IPTables is running and rules
-    exist
-    '''
-    # iptables command exists and returns 0 on calls
-    # (should return 0 on iptables -C)
-    mock_command('iptables', {'-S': ('-P INPUT DENY', '0')}, Pihole)
-    # modinfo returns always true (ip_tables module check)
-    mock_command('modinfo', {'*': ('', '0')}, Pihole)
-    # Whiptail dialog returns Cancel for user prompt
-    mock_command('whiptail', {'*': ('', '0')}, Pihole)
-    configureFirewall = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    configureFirewall
-    ''')
-    expected_stdout = 'Installing new IPTables firewall rulesets'
-    assert expected_stdout in configureFirewall.stdout
-    firewall_calls = Pihole.run('cat /var/log/iptables').stdout
-    # General call type occurances
-    assert len(re.findall(r'iptables -S', firewall_calls)) == 1
-    assert len(re.findall(r'iptables -C', firewall_calls)) == 4
-    assert len(re.findall(r'iptables -I', firewall_calls)) == 0
-    # Specific port call occurances
-    assert len(re.findall(r'tcp --dport 80', firewall_calls)) == 1
-    assert len(re.findall(r'tcp --dport 53', firewall_calls)) == 1
-    assert len(re.findall(r'udp --dport 53', firewall_calls)) == 1
-    assert len(re.findall(r'tcp --dport 4711:4720', firewall_calls)) == 1
-def test_configureFirewall_IPTables_enabled_not_exist_no_errors(Pihole):
-    '''
-    confirms IPTables rules are applied when IPTables is running and rules do
-    not exist
-    '''
-    # iptables command and returns 0 on calls (should return 1 on iptables -C)
-    mock_command(
-        'iptables',
-        {
-            '-S': (
-                '-P INPUT DENY',
-                '0'
-            ),
-            '-C': (
-                '',
-                1
-            ),
-            '-I': (
-                '',
-                0
-            )
-        },
-        Pihole
-    )
-    # modinfo returns always true (ip_tables module check)
-    mock_command('modinfo', {'*': ('', '0')}, Pihole)
-    # Whiptail dialog returns Cancel for user prompt
-    mock_command('whiptail', {'*': ('', '0')}, Pihole)
-    configureFirewall = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    configureFirewall
-    ''')
-    expected_stdout = 'Installing new IPTables firewall rulesets'
-    assert expected_stdout in configureFirewall.stdout
-    firewall_calls = Pihole.run('cat /var/log/iptables').stdout
-    # General call type occurances
-    assert len(re.findall(r'iptables -S', firewall_calls)) == 1
-    assert len(re.findall(r'iptables -C', firewall_calls)) == 4
-    assert len(re.findall(r'iptables -I', firewall_calls)) == 4
-    # Specific port call occurances
-    assert len(re.findall(r'tcp --dport 80', firewall_calls)) == 2
-    assert len(re.findall(r'tcp --dport 53', firewall_calls)) == 2
-    assert len(re.findall(r'udp --dport 53', firewall_calls)) == 2
-    assert len(re.findall(r'tcp --dport 4711:4720', firewall_calls)) == 2
-def test_selinux_enforcing_default_exit(Pihole):
-    '''
-    confirms installer prompts to exit when SELinux is Enforcing by default
-    '''
-    # getenforce returns the running state of SELinux
-    mock_command('getenforce', {'*': ('Enforcing', '0')}, Pihole)
-    # Whiptail dialog returns Cancel for user prompt
-    mock_command('whiptail', {'*': ('', '1')}, Pihole)
     check_selinux = Pihole.run('''
+    rm -f /etc/selinux/config
     source /opt/pihole/basic-install.sh
     checkSelinux
     ''')
-    expected_stdout = info_box + ' SELinux mode detected: Enforcing'
-    assert expected_stdout in check_selinux.stdout
-    expected_stdout = 'SELinux Enforcing detected, exiting installer'
-    assert expected_stdout in check_selinux.stdout
-    assert check_selinux.rc == 1
-def test_selinux_enforcing_continue(Pihole):
-    '''
-    confirms installer prompts to continue with custom policy warning
-    '''
-    # getenforce returns the running state of SELinux
-    mock_command('getenforce', {'*': ('Enforcing', '0')}, Pihole)
-    # Whiptail dialog returns Continue for user prompt
-    mock_command('whiptail', {'*': ('', '0')}, Pihole)
-    check_selinux = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    checkSelinux
-    ''')
-    expected_stdout = info_box + ' SELinux mode detected: Enforcing'
-    assert expected_stdout in check_selinux.stdout
-    expected_stdout = info_box + (' Continuing installation with SELinux '
-                                  'Enforcing')
-    assert expected_stdout in check_selinux.stdout
-    expected_stdout = info_box + (' Please refer to official SELinux '
-                                  'documentation to create a custom policy')
-    assert expected_stdout in check_selinux.stdout
-    assert check_selinux.rc == 0
-def test_selinux_permissive(Pihole):
-    '''
-    confirms installer continues when SELinux is Permissive
-    '''
-    # getenforce returns the running state of SELinux
-    mock_command('getenforce', {'*': ('Permissive', '0')}, Pihole)
-    check_selinux = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    checkSelinux
-    ''')
-    expected_stdout = info_box + ' SELinux mode detected: Permissive'
-    assert expected_stdout in check_selinux.stdout
-    assert check_selinux.rc == 0
-def test_selinux_disabled(Pihole):
-    '''
-    confirms installer continues when SELinux is Disabled
-    '''
-    mock_command('getenforce', {'*': ('Disabled', '0')}, Pihole)
-    check_selinux = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    checkSelinux
-    ''')
-    expected_stdout = info_box + ' SELinux mode detected: Disabled'
+    expected_stdout = info_box + ' SELinux not detected'
     assert expected_stdout in check_selinux.stdout
     assert check_selinux.rc == 0
@@ -338,7 +119,7 @@ def test_installPiholeWeb_fresh_install_no_errors(Pihole):
     expected_stdout = tick_box + (' Creating directory for blocking page, '
                                   'and copying files')
     assert expected_stdout in installWeb.stdout
-    expected_stdout = cross_box + ' Backing up index.lighttpd.html'
+    expected_stdout = info_box + ' Backing up index.lighttpd.html'
     assert expected_stdout in installWeb.stdout
     expected_stdout = ('No default index.lighttpd.html file found... '
                        'not backing up')
@@ -399,7 +180,10 @@ def test_FTL_detect_aarch64_no_errors(Pihole):
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
     create_pihole_user
-    FTLdetect
+    funcOutput=$(get_binary_name)
+    binary="pihole-FTL${funcOutput##*pihole-FTL}"
+    theRest="${funcOutput%pihole-FTL*}"
+    FTLdetect "${binary}" "${theRest}"
     ''')
     expected_stdout = info_box + ' FTL Checks...'
     assert expected_stdout in detectPlatform.stdout
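The two parameter expansions used in these updated tests split the binary name off the end of get_binary_name's output; a standalone illustration with a made-up status string:

    funcOutput="  [i] Detecting architecture...  [v] Detected x86_64 architecture pihole-FTL-linux-x86_64"
    binary="pihole-FTL${funcOutput##*pihole-FTL}"   # strip up to the last "pihole-FTL" -> "pihole-FTL-linux-x86_64"
    theRest="${funcOutput%pihole-FTL*}"             # strip from "pihole-FTL" onward -> the leading status text
    echo "${binary}"
    echo "${theRest}"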
@@ -420,7 +204,10 @@ def test_FTL_detect_armv6l_no_errors(Pihole):
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
     create_pihole_user
-    FTLdetect
+    funcOutput=$(get_binary_name)
+    binary="pihole-FTL${funcOutput##*pihole-FTL}"
+    theRest="${funcOutput%pihole-FTL*}"
+    FTLdetect "${binary}" "${theRest}"
     ''')
     expected_stdout = info_box + ' FTL Checks...'
     assert expected_stdout in detectPlatform.stdout
@@ -442,7 +229,10 @@ def test_FTL_detect_armv7l_no_errors(Pihole):
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
     create_pihole_user
-    FTLdetect
+    funcOutput=$(get_binary_name)
+    binary="pihole-FTL${funcOutput##*pihole-FTL}"
+    theRest="${funcOutput%pihole-FTL*}"
+    FTLdetect "${binary}" "${theRest}"
     ''')
     expected_stdout = info_box + ' FTL Checks...'
     assert expected_stdout in detectPlatform.stdout
@@ -459,7 +249,10 @@ def test_FTL_detect_x86_64_no_errors(Pihole):
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
     create_pihole_user
-    FTLdetect
+    funcOutput=$(get_binary_name)
+    binary="pihole-FTL${funcOutput##*pihole-FTL}"
+    theRest="${funcOutput%pihole-FTL*}"
+    FTLdetect "${binary}" "${theRest}"
     ''')
     expected_stdout = info_box + ' FTL Checks...'
     assert expected_stdout in detectPlatform.stdout
@@ -476,7 +269,10 @@ def test_FTL_detect_unknown_no_errors(Pihole):
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
     create_pihole_user
-    FTLdetect
+    funcOutput=$(get_binary_name)
+    binary="pihole-FTL${funcOutput##*pihole-FTL}"
+    theRest="${funcOutput%pihole-FTL*}"
+    FTLdetect "${binary}" "${theRest}"
     ''')
     expected_stdout = 'Not able to detect architecture (unknown: mips)'
     assert expected_stdout in detectPlatform.stdout
@@ -495,64 +291,14 @@ def test_FTL_download_aarch64_no_errors(Pihole):
     ''')
     download_binary = Pihole.run('''
     source /opt/pihole/basic-install.sh
-    binary="pihole-FTL-aarch64-linux-gnu"
     create_pihole_user
-    FTLinstall
+    FTLinstall "pihole-FTL-aarch64-linux-gnu"
     ''')
     expected_stdout = tick_box + ' Downloading and Installing FTL'
     assert expected_stdout in download_binary.stdout
     assert 'error' not in download_binary.stdout.lower()
-def test_FTL_download_unknown_fails_no_errors(Pihole):
-    '''
-    confirms unknown binary is not downloaded for FTL engine
-    '''
-    # mock whiptail answers and ensure installer dependencies
-    mock_command('whiptail', {'*': ('', '0')}, Pihole)
-    Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    distro_check
-    install_dependent_packages ${INSTALLER_DEPS[@]}
-    ''')
-    download_binary = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    binary="pihole-FTL-mips"
-    create_pihole_user
-    FTLinstall
-    ''')
-    expected_stdout = cross_box + ' Downloading and Installing FTL'
-    assert expected_stdout in download_binary.stdout
-    error1 = 'Error: URL https://github.com/pi-hole/FTL/releases/download/'
-    assert error1 in download_binary.stdout
-    error2 = 'not found'
-    assert error2 in download_binary.stdout
-def test_FTL_download_binary_unset_no_errors(Pihole):
-    '''
-    confirms unset binary variable does not download FTL engine
-    '''
-    # mock whiptail answers and ensure installer dependencies
-    mock_command('whiptail', {'*': ('', '0')}, Pihole)
-    Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    distro_check
-    install_dependent_packages ${INSTALLER_DEPS[@]}
-    ''')
-    download_binary = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    create_pihole_user
-    FTLinstall
-    ''')
-    expected_stdout = cross_box + ' Downloading and Installing FTL'
-    assert expected_stdout in download_binary.stdout
-    error1 = 'Error: URL https://github.com/pi-hole/FTL/releases/download/'
-    assert error1 in download_binary.stdout
-    error2 = 'not found'
-    assert error2 in download_binary.stdout
 def test_FTL_binary_installed_and_responsive_no_errors(Pihole):
     '''
     confirms FTL binary is copied and functional in installed location
@@ -560,7 +306,10 @@ def test_FTL_binary_installed_and_responsive_no_errors(Pihole):
     installed_binary = Pihole.run('''
     source /opt/pihole/basic-install.sh
     create_pihole_user
-    FTLdetect
+    funcOutput=$(get_binary_name)
+    binary="pihole-FTL${funcOutput##*pihole-FTL}"
+    theRest="${funcOutput%pihole-FTL*}"
+    FTLdetect "${binary}" "${theRest}"
     pihole-FTL version
     ''')
     expected_stdout = 'v'
@@ -700,3 +449,42 @@ def test_IPv6_ULA_GUA_test(Pihole):
     ''')
     expected_stdout = 'Found IPv6 ULA address, using it for blocking IPv6 ads'
     assert expected_stdout in detectPlatform.stdout
+def test_validate_ip_valid(Pihole):
+    '''
+    Given a valid IP address, valid_ip returns success
+    '''
+    output = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    valid_ip "192.168.1.1"
+    ''')
+    assert output.rc == 0
+def test_validate_ip_invalid_octet(Pihole):
+    '''
+    Given an invalid IP address (large octet), valid_ip returns an error
+    '''
+    output = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    valid_ip "1092.168.1.1"
+    ''')
+    assert output.rc == 1
+def test_validate_ip_invalid_letters(Pihole):
+    '''
+    Given an invalid IP address (contains letters), valid_ip returns an error
+    '''
+    output = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    valid_ip "not an IP"
+    ''')
+    assert output.rc == 1


@@ -8,6 +8,69 @@ from conftest import (
 )
+def mock_selinux_config(state, Pihole):
+    '''
+    Creates a mock SELinux config file with expected content
+    '''
+    # validate state string
+    valid_states = ['enforcing', 'permissive', 'disabled']
+    assert state in valid_states
+    # getenforce returns the running state of SELinux
+    mock_command('getenforce', {'*': (state.capitalize(), '0')}, Pihole)
+    # create mock configuration with desired content
+    Pihole.run('''
+    mkdir /etc/selinux
+    echo "SELINUX={state}" > /etc/selinux/config
+    '''.format(state=state.lower()))
+@pytest.mark.parametrize("tag", [('centos'), ('fedora'), ])
+def test_selinux_enforcing_exit(Pihole):
+    '''
+    confirms installer prompts to exit when SELinux is Enforcing by default
+    '''
+    mock_selinux_config("enforcing", Pihole)
+    check_selinux = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    checkSelinux
+    ''')
+    expected_stdout = cross_box + ' Current SELinux: Enforcing'
+    assert expected_stdout in check_selinux.stdout
+    expected_stdout = 'SELinux Enforcing detected, exiting installer'
+    assert expected_stdout in check_selinux.stdout
+    assert check_selinux.rc == 1
+@pytest.mark.parametrize("tag", [('centos'), ('fedora'), ])
+def test_selinux_permissive(Pihole):
+    '''
+    confirms installer continues when SELinux is Permissive
+    '''
+    mock_selinux_config("permissive", Pihole)
+    check_selinux = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    checkSelinux
+    ''')
+    expected_stdout = tick_box + ' Current SELinux: Permissive'
+    assert expected_stdout in check_selinux.stdout
+    assert check_selinux.rc == 0
+@pytest.mark.parametrize("tag", [('centos'), ('fedora'), ])
+def test_selinux_disabled(Pihole):
+    '''
+    confirms installer continues when SELinux is Disabled
+    '''
+    mock_selinux_config("disabled", Pihole)
+    check_selinux = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    checkSelinux
+    ''')
+    expected_stdout = tick_box + ' Current SELinux: Disabled'
+    assert expected_stdout in check_selinux.stdout
+    assert check_selinux.rc == 0
 @pytest.mark.parametrize("tag", [('fedora'), ])
 def test_epel_and_remi_not_installed_fedora(Pihole):
     '''