Compare commits

3 commits: `tweak/move...` → `fix/simpli...`

| Author | SHA1 | Date |
|---|---|---|
| | `eaaa0c1f7f` | |
| | `cea9205136` | |
| | `0cc1e88608` | |
```diff
@@ -1,4 +1,4 @@
-# EditorConfig is awesome: https://editorconfig.org/
+# EditorConfig is awesome: http://EditorConfig.org
 
 # top-most EditorConfig file
 root = true
```
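For orientation, the two context lines in this hunk come from a standard EditorConfig file. A minimal sketch of such a file — the `[*]` section and the values under it are illustrative assumptions, not taken from this repository — looks like:

```ini
# EditorConfig is awesome: https://editorconfig.org/

# top-most EditorConfig file: stop searching in parent directories
root = true

# Illustrative section (assumed, not from this diff): settings for all files
[*]
end_of_line = lf
insert_final_newline = true
```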
.github/ISSUE_TEMPLATE.md (vendored, new file, 37 lines)

```diff
@@ -0,0 +1,37 @@
+**In raising this issue, I confirm the following:** `{please fill the checkboxes, e.g: [X]}`
+
+- [] I have read and understood the [contributors guide](https://github.com/pi-hole/pi-hole/blob/master/CONTRIBUTING.md).
+- [] The issue I am reporting can be *replicated*.
+- [] The issue I am reporting isn't a duplicate (see [FAQs](https://github.com/pi-hole/pi-hole/wiki/FAQs), [closed issues](https://github.com/pi-hole/pi-hole/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), and [open issues](https://github.com/pi-hole/pi-hole/issues)).
+
+**How familiar are you with the the source code relevant to this issue?:**
+
+`{Replace this with a number from 1 to 10. 1 being not familiar, and 10 being very familiar}`
+
+---
+
+**Expected behaviour:**
+
+`{A detailed description of what you expect to see}`
+
+**Actual behaviour:**
+
+`{A detailed description and/or screenshots of what you do see}`
+
+**Steps to reproduce:**
+
+`{Detailed steps of how we can reproduce this}`
+
+**Debug token provided by [uploading `pihole -d` log](https://discourse.pi-hole.net/t/the-pihole-command-with-examples/738#debug):**
+
+`{Alphanumeric token}`
+
+**Troubleshooting undertaken, and/or other relevant information:**
+
+`{Steps of what you have done to fix this}`
+
+> * `{Please delete this quoted section when opening your issue}`
+> * You must follow the template instructions. Failure to do so will result in your issue being closed.
+> * Please [submit any feature requests here](https://discourse.pi-hole.net/c/feature-requests), so it is votable and trackable by the community.
+> * Please respect that Pi-hole is developed by volunteers, who can only reply in their spare time.
+> * Detail helps us understand and resolve an issue quicker, but please ensure it's relevant.
+> * _This template was created based on the work of [`udemy-dl`](https://github.com/nishad/udemy-dl/blob/master/LICENSE)._
```
.github/PULL_REQUEST_TEMPLATE.md (vendored, new file, 31 lines)

```diff
@@ -0,0 +1,31 @@
+**By submitting this pull request, I confirm the following:**
+*please fill any appropriate checkboxes, e.g: [X]*
+
+- [ ] I have read and understood the [contributors guide](https://github.com/pi-hole/pi-hole/blob/master/CONTRIBUTING.md), as well as this entire template.
+- [ ] I have made only one major change in my proposed changes.
+- [ ] I have commented my proposed changes within the code.
+- [ ] I have tested my proposed changes, and have included unit tests where possible.
+- [ ] I am willing to help maintain this change if there are issues with it later.
+- [ ] I give this submission freely and claim no ownership.
+- [ ] It is compatible with the [EUPL 1.2 license](https://opensource.org/licenses/EUPL-1.1)
+- [ ] I have squashed any insignificant commits. ([`git rebase`](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html))
+
+Please make sure you [Sign Off](https://github.com/pi-hole/pi-hole/wiki/How-to-signoff-your-commits.) all commits. Pi-hole enforces the [DCO](https://github.com/pi-hole/pi-hole/wiki/Contributing-to-the-project).
+
+---
+
+**What does this PR aim to accomplish?:**
+
+*A detailed description, screenshots (if necessary), as well as links to any relevant GitHub issues*
+
+**How does this PR accomplish the above?:**
+
+*A detailed description (such as a changelog) and screenshots (if necessary) of the implemented fix*
+
+**What documentation changes (if any) are needed to support this PR?:**
+
+*A detailed list of any necessary changes*
+
+---
+
+* You must follow the template instructions. Failure to do so will result in your pull request being closed.
+* Please respect that Pi-hole is developed by volunteers, who can only reply in their spare time.
```
.gitignore (vendored, 2 changed lines)

```diff
@@ -15,7 +15,7 @@ __pycache__
 # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm
 # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
 
-# All idea files, with exceptions
+# All idea files, with execptions
 .idea
 !.idea/codeStyles/*
 !.idea/codeStyleSettings.xml
```
```diff
@@ -2,4 +2,5 @@ linters:
   shellcheck:
     shell: bash
   phpcs:
+  csslint:
   flake8:
```
.travis.yml (17 changed lines)

```diff
@@ -1,5 +1,12 @@
-import:
-  - source: pi-hole/.github:/build-configs/core.yml@main
-    if: branch = master
-  - source: pi-hole/.github:/build-configs/core.yml@latest
-    if: branch != master
+sudo: required
+services:
+  - docker
+language: python
+python:
+  - "2.7"
+install:
+  - pip install -r requirements.txt
+
+script:
+  # tox.ini handles setup, ordering of docker build first, and then run tests
+  - tox
```
README.md (196 changed lines)

```diff
@@ -1,22 +1,14 @@
-<!-- markdownlint-configure-file { "MD004": { "style": "consistent" } } -->
-<!-- markdownlint-disable MD033 -->
 <p align="center">
-<a href="https://pi-hole.net/">
-<img src="https://pi-hole.github.io/graphics/Vortex/Vortex_with_Wordmark.svg" width="150" height="260" alt="Pi-hole">
-</a>
-<br>
-<strong>Network-wide ad blocking via your own Linux hardware</strong>
+<a href="https://pi-hole.net"><img src="https://pi-hole.github.io/graphics/Vortex/Vortex_with_text.png" width="150" height="255" alt="Pi-hole"></a><br/>
+<b>Network-wide ad blocking via your own Linux hardware</b><br/>
 </p>
-<!-- markdownlint-enable MD033 -->
-
-#
 
-The Pi-hole® is a [DNS sinkhole](https://en.wikipedia.org/wiki/DNS_Sinkhole) that protects your devices from unwanted content, without installing any client-side software.
+The Pi-hole[®](https://pi-hole.net/trademark-rules-and-brand-guidelines/) is a [DNS sinkhole](https://en.wikipedia.org/wiki/DNS_Sinkhole) that protects your devices from unwanted content, without installing any client-side software.
 
-- **Easy-to-install**: our versatile installer walks you through the process, and takes less than ten minutes
+- **Easy-to-install**: our versatile installer walks you through the process, and [takes less than ten minutes](https://www.youtube.com/watch?v=vKWjx1AQYgs)
 - **Resolute**: content is blocked in _non-browser locations_, such as ad-laden mobile apps and smart TVs
 - **Responsive**: seamlessly speeds up the feel of everyday browsing by caching DNS queries
-- **Lightweight**: runs smoothly with [minimal hardware and software requirements](https://docs.pi-hole.net/main/prerequisites/)
+- **Lightweight**: runs smoothly with [minimal hardware and software requirements](https://discourse.pi-hole.net/t/hardware-software-requirements/273)
 - **Robust**: a command line interface that is quality assured for interoperability
 - **Insightful**: a beautiful responsive Web Interface dashboard to view and control your Pi-hole
 - **Versatile**: can optionally function as a [DHCP server](https://discourse.pi-hole.net/t/how-do-i-use-pi-holes-built-in-dhcp-server-and-why-would-i-want-to/3026), ensuring *all* your devices are protected automatically
```
````diff
@@ -25,35 +17,32 @@ The Pi-hole® is a [DNS sinkhole](https://en.wikipedia.org/wiki/DNS_Sinkhole) th
 - **Free**: open source software which helps ensure _you_ are the sole person in control of your privacy
 
 -----
 
-Master [](https://travis-ci.com/pi-hole/pi-hole) Development [](https://travis-ci.com/pi-hole/pi-hole)
+[](https://www.codacy.com/app/Pi-hole/pi-hole?utm_source=github.com&utm_medium=referral&utm_content=pi-hole/pi-hole&utm_campaign=Badge_Grade)
+[](https://travis-ci.org/pi-hole/pi-hole)
+[](https://www.bountysource.com/trackers/3011939-pi-hole-pi-hole?utm_source=3011939&utm_medium=shield&utm_campaign=TRACKER_BADGE)
 
 ## One-Step Automated Install
 
 Those who want to get started quickly and conveniently may install Pi-hole using the following command:
 
-### `curl -sSL https://install.pi-hole.net | bash`
+#### `curl -sSL https://install.pi-hole.net | bash`
 
 ## Alternative Install Methods
 
-[Piping to `bash` is controversial](https://pi-hole.net/2016/07/25/curling-and-piping-to-bash), as it prevents you from [reading code that is about to run](https://github.com/pi-hole/pi-hole/blob/master/automated%20install/basic-install.sh) on your system. Therefore, we provide these alternative installation methods which allow code review before installation:
+Piping to `bash` is [controversial](https://pi-hole.net/2016/07/25/curling-and-piping-to-bash), as it prevents you from [reading code that is about to run](https://github.com/pi-hole/pi-hole/blob/master/automated%20install/basic-install.sh) on your system. Therefore, we provide these alternative installation methods which allow code review before installation:
 
 ### Method 1: Clone our repository and run
 
-```
+```bash
 git clone --depth 1 https://github.com/pi-hole/pi-hole.git Pi-hole
 cd "Pi-hole/automated install/"
 sudo bash basic-install.sh
 ```
 
 ### Method 2: Manually download the installer and run
 
-```
+```bash
 wget -O basic-install.sh https://install.pi-hole.net
 sudo bash basic-install.sh
 ```
 
-## [Post-install: Make your network take advantage of Pi-hole](https://docs.pi-hole.net/main/post-install/)
+## Post-install: Make your network take advantage of Pi-hole
 
 Once the installer has been run, you will need to [configure your router to have **DHCP clients use Pi-hole as their DNS server**](https://discourse.pi-hole.net/t/how-do-i-configure-my-devices-to-use-pi-hole-as-their-dns-server/245) which ensures that all devices connecting to your network will have content blocked without any further intervention.
````
```diff
@@ -64,102 +53,161 @@ As a last resort, you can always manually set each device to use Pi-hole as thei
 -----
 
 ## Pi-hole is free, but powered by your support
 
 There are many reoccurring costs involved with maintaining free, open source, and privacy-respecting software; expenses which [our volunteer developers](https://github.com/orgs/pi-hole/people) pitch in to cover out-of-pocket. This is just one example of how strongly we feel about our software, as well as the importance of keeping it maintained.
 
 Make no mistake: **your support is absolutely vital to help keep us innovating!**
 
-### [Donations](https://pi-hole.net/donate)
+### Donations
 
-Sending a donation using our Sponsor Button is **extremely helpful** in offsetting a portion of our monthly expenses:
+Sending a donation using our links below is **extremely helpful** in offsetting a portion of our monthly expenses:
+
+- <img src="https://pi-hole.github.io/graphics/Badges/paypal-badge-black.svg" width="24" height="24" alt="PP"/> <a href="https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=3J2L3Z4DHW9UY">Donate via PayPal</a><br/>
+- <img src="https://pi-hole.github.io/graphics/Badges/bitcoin-badge-black.svg" width="24" height="24" alt="BTC"/> [Bitcoin, Bitcoin Cash, Ethereum, Litecoin](https://commerce.coinbase.com/checkout/dd304d04-f324-4a77-931b-0db61c77a41b)
 
 ### Alternative support
 
-If you'd rather not [donate](https://pi-hole.net/donate/) (_which is okay!_), there are other ways you can help support us:
+If you'd rather not donate (_which is okay!_), there are other ways you can help support us:
 
 - [Patreon](https://patreon.com/pihole) _Become a patron for rewards_
-- [Digital Ocean](https://www.digitalocean.com/?refcode=344d234950e1) _affiliate link_
+- [Digital Ocean](http://www.digitalocean.com/?refcode=344d234950e1) _affiliate link_
 - [Stickermule](https://www.stickermule.com/unlock?ref_id=9127301701&utm_medium=link&utm_source=invite) _earn a $10 credit after your first purchase_
-- [Pi-hole Swag Store](https://pi-hole.net/shop/) _affiliate link_
 - [Amazon](http://www.amazon.com/exec/obidos/redirect-home/pihole09-20) _affiliate link_
+- [DNS Made Easy](https://cp.dnsmadeeasy.com/u/133706) _affiliate link_
+- [Vultr](http://www.vultr.com/?ref=7190426) _affiliate link_
 - Spreading the word about our software, and how you have benefited from it
 
 ### Contributing via GitHub
 
 We welcome _everyone_ to contribute to issue reports, suggest new features, and create pull requests.
 
 If you have something to add - anything from a typo through to a whole new feature, we're happy to check it out! Just make sure to fill out our template when submitting your request; the questions that it asks will help the volunteers quickly understand what you're aiming to achieve.
 
 You'll find that the [install script](https://github.com/pi-hole/pi-hole/blob/master/automated%20install/basic-install.sh) and the [debug script](https://github.com/pi-hole/pi-hole/blob/master/advanced/Scripts/piholeDebug.sh) have an abundance of comments, which will help you better understand how Pi-hole works. They're also a valuable resource to those who want to learn how to write scripts or code a program! We encourage anyone who likes to tinker to read through it and submit a pull request for us to review.
 
+### Presentations about Pi-hole
+
+Word-of-mouth continues to help our project grow immensely, and so we are helping make this easier for people.
+
+If you are going to be presenting Pi-hole at a conference, meetup or even a school project, [get in touch with us](https://pi-hole.net/2017/05/17/giving-a-presentation-on-pi-hole-contact-us-first-for-some-goodies-and-support/) so we can hook you up with free swag to hand out to your audience!
+
 -----
 
 ## Getting in touch with us
 
-While we are primarily reachable on our [Discourse User Forum](https://discourse.pi-hole.net/), we can also be found on a variety of social media outlets. **Please be sure to check the FAQ's** before starting a new discussion, as we do not have the spare time to reply to every request for assistance.
+While we are primarily reachable on our <a href="https://discourse.pi-hole.net/">Discourse User Forum</a>, we can also be found on a variety of social media outlets. **Please be sure to check the FAQ's** before starting a new discussion, as we do not have the spare time to reply to every request for assistance.
 
-- [Frequently Asked Questions](https://discourse.pi-hole.net/c/faqs)
-- [Feature Requests](https://discourse.pi-hole.net/c/feature-requests?order=votes)
-- [Reddit](https://www.reddit.com/r/pihole/)
-- [Twitter](https://twitter.com/The_Pi_hole)
+<ul>
+<li><a href="https://discourse.pi-hole.net/c/faqs">Frequently Asked Questions</a></li>
+<li><a href="https://github.com/pi-hole/pi-hole/wiki">Pi-hole Wiki</a></li>
+<li><a href="https://discourse.pi-hole.net/c/feature-requests?order=votes">Feature Requests</a></li>
+<li><a href="https://discourse.pi-hole.net/">Discourse User Forum</a></li>
+<li><a href="https://www.reddit.com/r/pihole/">Reddit</a></li>
+<li><a href="https://gitter.im/pi-hole/pi-hole">Gitter</a> (Real-time chat)</li>
+<li><a href="https://twitter.com/The_Pi_Hole">Twitter</a></li>
+<li><a href="https://www.youtube.com/channel/UCT5kq9w0wSjogzJb81C9U0w">YouTube</a></li>
+<li><a href="https://www.facebook.com/ThePiHole/">Facebook</a></li>
+</ul>
 
 -----
 
 ## Breakdown of Features
 
 ### The Command Line Interface
 
-The [pihole](https://docs.pi-hole.net/core/pihole-command/) command has all the functionality necessary to be able to fully administer the Pi-hole, without the need of the Web Interface. It's fast, user-friendly, and auditable by anyone with an understanding of `bash`.
+The `pihole` command has all the functionality necessary to be able to fully administer the Pi-hole, without the need of the Web Interface. It's fast, user-friendly, and auditable by anyone with an understanding of `bash`.
 
-
+<a href="https://pi-hole.github.io/graphics/Screenshots/blacklist-cli.gif"><img src="https://pi-hole.github.io/graphics/Screenshots/blacklist-cli.gif" alt="Pi-hole Blacklist Demo"/></a>
 
 Some notable features include:
 
-- [Whitelisting, Blacklisting and Regex](https://docs.pi-hole.net/core/pihole-command/#whitelisting-blacklisting-and-regex)
-- [Debugging utility](https://docs.pi-hole.net/core/pihole-command/#debugger)
-- [Viewing the live log file](https://docs.pi-hole.net/core/pihole-command/#tail)
-- [Updating Ad Lists](https://docs.pi-hole.net/core/pihole-command/#gravity)
-- [Querying Ad Lists for blocked domains](https://docs.pi-hole.net/core/pihole-command/#query)
-- [Enabling and Disabling Pi-hole](https://docs.pi-hole.net/core/pihole-command/#enable-disable)
-- ... and *many* more!
+* [Whitelisting, Blacklisting and Wildcards](https://github.com/pi-hole/pi-hole/wiki/Core-Function-Breakdown#whitelisting-blacklisting-and-wildcards)
+* [Debugging utility](https://github.com/pi-hole/pi-hole/wiki/Core-Function-Breakdown#debugger)
+* [Viewing the live log file](https://github.com/pi-hole/pi-hole/wiki/Core-Function-Breakdown#tail)
+* [Real-time Statistics via `ssh`](https://github.com/pi-hole/pi-hole/wiki/Core-Function-Breakdown#chronometer) or [your TFT LCD screen](http://www.amazon.com/exec/obidos/ASIN/B00ID39LM4/pihole09-20)
+* [Updating Ad Lists](https://github.com/pi-hole/pi-hole/wiki/Core-Function-Breakdown#gravity)
+* [Querying Ad Lists for blocked domains](https://github.com/pi-hole/pi-hole/wiki/Core-Function-Breakdown#query)
+* [Enabling and Disabling Pi-hole](https://github.com/pi-hole/pi-hole/wiki/Core-Function-Breakdown#enable--disable)
+* ... and *many* more!
 
-You can read our [Core Feature Breakdown](https://docs.pi-hole.net/core/pihole-command/#pi-hole-core) for more information.
+You can read our [Core Feature Breakdown](https://github.com/pi-hole/pi-hole/wiki/Core-Function-Breakdown), as well as read up on [example usage](https://discourse.pi-hole.net/t/the-pihole-command-with-examples/738) for more information.
 
 ### The Web Interface Dashboard
 
 This [optional dashboard](https://github.com/pi-hole/AdminLTE) allows you to view stats, change settings, and configure your Pi-hole. It's the power of the Command Line Interface, with none of the learning curve!
 
-
+<img src="https://pi-hole.github.io/graphics/Screenshots/pihole-dashboard.png" alt="Pi-hole Dashboard"/></a>
 
 Some notable features include:
 
-- Mobile friendly interface
-- Password protection
-- Detailed graphs and doughnut charts
-- Top lists of domains and clients
-- A filterable and sortable query log
-- Long Term Statistics to view data over user-defined time ranges
-- The ability to easily manage and configure Pi-hole features
-- ... and all the main features of the Command Line Interface!
+* Mobile friendly interface
+* Password protection
+* Detailed graphs and doughnut charts
+* Top lists of domains and clients
+* A filterable and sortable query log
+* Long Term Statistics to view data over user-defined time ranges
+* The ability to easily manage and configure Pi-hole features
+* ... and all the main features of the Command Line Interface!
 
 There are several ways to [access the dashboard](https://discourse.pi-hole.net/t/how-do-i-access-pi-holes-dashboard-admin-interface/3168):
 
-1. `http://pi.hole/admin/` (when using Pi-hole as your DNS server)
-2. `http://<IP_ADDPRESS_OF_YOUR_PI_HOLE>/admin/`
+1. `http://<IP_ADDPRESS_OF_YOUR_PI_HOLE>/admin/`
+2. `http://pi.hole/admin/` (when using Pi-hole as your DNS server)
 3. `http://pi.hole/` (when using Pi-hole as your DNS server)
 
 ## Faster-than-light Engine
 
 FTLDNS is a lightweight, purpose-built daemon used to provide statistics needed for the Web Interface, and its API can be easily integrated into your own projects. As the name implies, FTLDNS does this all *very quickly*!
 
 Some of the statistics you can integrate include:
 
-- Total number of domains being blocked
-- Total number of DNS queries today
-- Total number of ads blocked today
-- Percentage of ads blocked
-- Unique domains
-- Queries forwarded (to your chosen upstream DNS server)
-- Queries cached
-- Unique clients
+* Total number of domains being blocked
+* Total number of DNS queries today
+* Total number of ads blocked today
+* Percentage of ads blocked
+* Unique domains
+* Queries forwarded (to your chosen upstream DNS server)
+* Queries cached
+* Unique clients
 
-The API can be accessed via [`telnet`](https://github.com/pi-hole/FTL), the Web (`admin/api.php`) and Command Line (`pihole -c -j`). You can find out [more details over here](https://discourse.pi-hole.net/t/pi-hole-api/1863).
+The API can be accessed via [`telnet`](https://github.com/pi-hole/FTL), the Web (`admin/api.php`) and Command Line (`pihole -c -j`). You can out find [more details over here](https://discourse.pi-hole.net/t/pi-hole-api/1863).
```
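The Web endpoint named in the README text above (`admin/api.php`) sits under the dashboard path on the Pi-hole host. As a minimal sketch of how a caller might compose that URL — the `api_url` helper and the host/query values are our own illustration, not part of Pi-hole — it can be built like this:

```shell
# Illustrative helper (not part of Pi-hole): compose the URL for the Web API
# endpoint admin/api.php on a given host, with a given query string.
api_url() {
    printf 'http://%s/admin/api.php?%s' "$1" "$2"
}

# Example: a summary query against the pi.hole hostname used in this README.
api_url pi.hole summary
# -> http://pi.hole/admin/api.php?summary
```

The same statistics are available locally via the Command Line access the README mentions (`pihole -c -j`), with no HTTP request involved.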
```diff
 -----
 
+## The Origin Of Pi-hole
+
+Pi-hole being an **advertising-aware DNS/Web server**, makes use of the following technologies:
+
+* [`dnsmasq`](http://www.thekelleys.org.uk/dnsmasq/doc.html) - a lightweight DNS and DHCP server
+* [`curl`](https://curl.haxx.se) - A command line tool for transferring data with URL syntax
+* [`lighttpd`](https://www.lighttpd.net) - web server designed and optimized for high performance
+* [`php`](https://secure.php.net) - a popular general-purpose web scripting language
+* [AdminLTE Dashboard](https://github.com/almasaeed2010/AdminLTE) - premium admin control panel based on Bootstrap 3.x
+
+While quite outdated at this point, [this original blog post about Pi-hole](https://jacobsalmela.com/2015/06/16/block-millions-ads-network-wide-with-a-raspberry-pi-hole-2-0/) goes into **great detail** about how Pi-hole was originally set up and how it works. Syntactically, it's no longer accurate, but the same basic principles and logic still apply to Pi-hole's current state.
+
+-----
+
+## Coverage
+
+- [Lifehacker: Turn A Raspberry Pi Into An Ad Blocker With A Single Command](https://www.lifehacker.com.au/2015/02/turn-a-raspberry-pi-into-an-ad-blocker-with-a-single-command/) (Feburary, 2015)
+- [MakeUseOf: Adblock Everywhere: The Raspberry Pi-Hole Way](http://www.makeuseof.com/tag/adblock-everywhere-raspberry-pi-hole-way/) (March, 2015)
+- [Catchpoint: Ad-Blocking on Apple iOS9: Valuing the End User Experience](http://blog.catchpoint.com/2015/09/14/ad-blocking-apple/) (September, 2015)
+- [Security Now Netcast: Pi-hole](https://www.youtube.com/watch?v=p7-osq_y8i8&t=100m26s) (October, 2015)
+- [TekThing: Raspberry Pi-Hole Makes Ads Disappear!](https://youtu.be/8Co59HU2gY0?t=2m) (December, 2015)
+- [Foolish Tech Show](https://youtu.be/bYyena0I9yc?t=2m4s) (December, 2015)
+- [Block Ads on All Home Devices for $53.18](https://medium.com/@robleathern/block-ads-on-all-home-devices-for-53-18-a5f1ec139693#.gj1xpgr5d) (December, 2015)
+- [Pi-Hole for Ubuntu 14.04](http://www.boyter.org/2015/12/pi-hole-ubuntu-14-04/) (December, 2015)
+- [MacObserver Podcast 585](https://www.macobserver.com/tmo/podcast/macgeekgab-585) (December, 2015)
+- [The Defrag Show: Endoscope USB Camera, The Final [HoloLens] Vote, Adblock Pi and more](https://channel9.msdn.com/Shows/The-Defrag-Show/Defrag-Endoscope-USB-Camera-The-Final-HoloLens-Vote-Adblock-Pi-and-more?WT.mc_id=dlvr_twitter_ch9#time=20m39s) (January, 2016)
+- [Adafruit: Pi-hole is a black hole for internet ads](https://blog.adafruit.com/2016/03/04/pi-hole-is-a-black-hole-for-internet-ads-piday-raspberrypi-raspberry_pi/) (March, 2016)
+- [Digital Trends: 5 Fun, Easy Projects You Can Try With a $35 Raspberry Pi](https://youtu.be/QwrKlyC2kdM?t=1m42s) (March, 2016)
+- [Adafruit: Raspberry Pi Quick Look at Pi Hole ad blocking server with Tony D](https://www.youtube.com/watch?v=eg4u2j1HYlI) (June, 2016)
+- [Devacron: OrangePi Zero as an Ad-Block server with Pi-Hole](http://www.devacron.com/orangepi-zero-as-an-ad-block-server-with-pi-hole/) (December, 2016)
+- [Linux Pro: The Hole Truth](http://www.linuxpromagazine.com/Issues/2017/200/The-sysadmin-s-daily-grind-Pi-hole) (July, 2017)
+- [Adafruit: installing Pi-hole on a Pi Zero W](https://learn.adafruit.com/pi-hole-ad-blocker-with-pi-zero-w/install-pi-hole) (August, 2017)
+- [CryptoAUSTRALIA: How We Tried 5 Privacy Focused Raspberry Pi Projects](https://blog.cryptoaustralia.org.au/2017/10/05/5-privacy-focused-raspberry-pi-projects/) (October, 2017)
+- [CryptoAUSTRALIA: Pi-hole Workshop](https://blog.cryptoaustralia.org.au/2017/11/02/pi-hole-network-wide-ad-blocker/) (November, 2017)
+- [Know How 355: Killing ads with a Raspberry Pi-Hole!](https://www.twit.tv/shows/know-how/episodes/355) (November, 2017)
+- [Hobohouse: Block Advertising on your Network with Pi-hole and Raspberry Pi](https://hobo.house/2018/02/27/block-advertising-with-pi-hole-and-raspberry-pi/) (March, 2018)
+- [Scott Helme: Securing DNS across all of my devices with Pi-Hole + DNS-over-HTTPS + 1.1.1.1](https://scotthelme.co.uk/securing-dns-across-all-of-my-devices-with-pihole-dns-over-https-1-1-1-1/) (April, 2018)
+- [Scott Helme: Catching and dealing with naughty devices on my home network](https://scotthelme.co.uk/catching-naughty-devices-on-my-home-network/) (April, 2018)
+- [Bloomberg Business Week: Brotherhood of the Ad blockers](https://www.bloomberg.com/news/features/2018-05-10/inside-the-brotherhood-of-pi-hole-ad-blockers) (May, 2018)
+- [Software Engineering Daily: Interview with the creator of Pi-hole](https://softwareengineeringdaily.com/2018/05/29/pi-hole-ad-blocker-hardware-with-jacob-salmela/) (May, 2018)
+- [Raspberry Pi: Block ads at home using Pi-hole and a Raspberry Pi](https://www.raspberrypi.org/blog/pi-hole-raspberry-pi/) (July, 2018)
+- [Troy Hunt: Mmm... Pi-hole...](https://www.troyhunt.com/mmm-pi-hole/) (September, 2018)
+- [PEBKAK Podcast: Interview With Jacob Salmela](https://www.jerseystudios.net/2018/10/11/150-pi-hole/) (October, 2018)
+
+-----
+
+## Pi-hole Projects
+
+- [The Big Blocklist Collection](https://wally3k.github.io)
+- [Pie in the Sky-Hole](https://dlaa.me/blog/post/skyhole)
+- [Copernicus: Windows Tray Application](https://github.com/goldbattle/copernicus)
+- [Magic Mirror with DNS Filtering](https://zonksec.com/blog/magic-mirror-dns-filtering/#dnssoftware)
```
|
||||||
|
- [Windows DNS Swapper](https://github.com/roots84/DNS-Swapper)
|
||||||
|
@@ -18,8 +18,9 @@
# WITHIN /etc/dnsmasq.d/yourname.conf #
###############################################################################

addn-hosts=/etc/pihole/gravity.list
addn-hosts=/etc/pihole/black.list
addn-hosts=/etc/pihole/local.list
addn-hosts=/etc/pihole/custom.list

domain-needed

@@ -37,8 +38,13 @@ interface=@INT@
cache-size=10000

log-queries
log-facility=/var/log/pihole/pihole.log
log-facility=/var/log/pihole.log

local-ttl=2

log-async

# If a DHCP client claims that its name is "wpad", ignore that.
# This fixes a security hole. see CERT Vulnerability VU#598349
dhcp-name-match=set:wpad-ignore,wpad
dhcp-ignore-names=tag:wpad-ignore
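The `addn-hosts` directives above point dnsmasq at extra hosts-format files: one "IP name [name ...]" entry per line, with blocked domains conventionally mapped to `0.0.0.0`. A throwaway sketch of that file format (the entries and temp file are illustrative, not real Pi-hole data):

```shell
# Build a small hosts-format list and count its blocked entries.
list=$(mktemp)
cat > "$list" <<'EOF'
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net
192.168.1.2 myhost.lan
EOF
# Lines beginning with 0.0.0.0 are the "blackholed" domains.
blocked=$(grep -c '^0\.0\.0\.0 ' "$list")
echo "blocked entries: $blocked"
rm -f "$list"
```

dnsmasq reloads such files on SIGHUP, which is why Pi-hole can update its lists without a full restart.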
@@ -1,7 +1,7 @@
# Determine if terminal is capable of showing colors
# Determine if terminal is capable of showing colours
if [[ -t 1 ]] && [[ $(tput colors) -ge 8 ]]; then
    # Bold and underline may not show up on all clients
    # If something MUST be emphasized, use both
    # If something MUST be emphasised, use both
    COL_BOLD='[1m'
    COL_ULINE='[4m'

@@ -13,7 +13,7 @@ LC_NUMERIC=C

# Retrieve stats from FTL engine
pihole-FTL() {
    ftl_port=$(cat /run/pihole-FTL.port 2> /dev/null)
    ftl_port=$(cat /var/run/pihole-FTL.port 2> /dev/null)
    if [[ -n "$ftl_port" ]]; then
        # Open connection to FTL
        exec 3<>"/dev/tcp/127.0.0.1/$ftl_port"

@@ -72,7 +72,7 @@ printFunc() {

    # Remove excess characters from main text
    if [[ "$text_main_len" -gt "$text_main_max_len" ]]; then
        # Trim text without colors
        # Trim text without colours
        text_main_trim="${text_main_nocol:0:$text_main_max_len}"
        # Replace with trimmed text
        text_main="${text_main/$text_main_nocol/$text_main_trim}"

@@ -88,7 +88,7 @@ printFunc() {

    [[ "$spc_num" -le 0 ]] && spc_num="0"
    spc=$(printf "%${spc_num}s")
    #spc="${spc// /.}" # Debug: Visualize spaces
    #spc="${spc// /.}" # Debug: Visualise spaces

    printf "%s%s$spc" "$title" "$text_main"

@@ -131,7 +131,7 @@ get_init_stats() {
    printf "%s%02d:%02d:%02d\\n" "$days" "$hrs" "$mins" "$secs"
}

# Set Color Codes
# Set Colour Codes
coltable="/opt/pihole/COL_TABLE"
if [[ -f "${coltable}" ]]; then
    source ${coltable}

@@ -153,7 +153,7 @@ get_init_stats() {

    sys_throttle_raw=$(vgt=$(sudo vcgencmd get_throttled); echo "${vgt##*x}")

    # Active Throttle Notice: https://bit.ly/2gnunOo
    # Active Throttle Notice: http://bit.ly/2gnunOo
    if [[ "$sys_throttle_raw" != "0" ]]; then
        case "$sys_throttle_raw" in
            *0001) thr_type="${COL_YELLOW}Under Voltage";;

@@ -236,7 +236,7 @@ get_sys_stats() {

    sys_name=$(hostname)

    [[ -n "$TEMPERATUREUNIT" ]] && temp_unit="${TEMPERATUREUNIT^^}" || temp_unit="C"
    [[ -n "$TEMPERATUREUNIT" ]] && temp_unit="$TEMPERATUREUNIT" || temp_unit="c"

    # Get storage stats for partition mounted on /
    read -r -a disk_raw <<< "$(df -B1 / 2> /dev/null | awk 'END{ print $3,$2,$5 }')"

@@ -269,7 +269,7 @@ get_sys_stats() {
    scr_lines="${scr_size[0]}"
    scr_cols="${scr_size[1]}"

    # Determine Chronometer size behavior
    # Determine Chronometer size behaviour
    if [[ "$scr_cols" -ge 58 ]]; then
        chrono_width="large"
    elif [[ "$scr_cols" -gt 40 ]]; then

@@ -308,7 +308,7 @@ get_sys_stats() {
    [[ "${cpu_freq}" == *".0"* ]] && cpu_freq="${cpu_freq/.0/}"
    fi

    # Determine color for temperature
    # Determine colour for temperature
    if [[ -n "$temp_file" ]]; then
        if [[ "$temp_unit" == "C" ]]; then
            cpu_temp=$(printf "%.0fc\\n" "$(calcFunc "$(< $temp_file) / 1000")")
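The chronometer's column layout relies on a small printf idiom: `printf "%Ns"` with no argument emits exactly N spaces, which pads a label out to a fixed width. A minimal sketch of that trick (variable names mirror printFunc, the label and width are illustrative):

```shell
# Pad "CPU" out to a 10-character column using printf-generated spaces.
text_main="CPU"
text_main_max_len=10
spc_num=$((text_main_max_len - ${#text_main}))
# Never ask printf for a negative width.
[ "$spc_num" -gt 0 ] || spc_num=0
spc=$(printf "%${spc_num}s")
padded="[${text_main}${spc}]"
echo "$padded"
```

The brackets only make the trailing spaces visible; the real script concatenates title, text, and padding the same way.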
@@ -1,113 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1090

# Pi-hole: A black hole for Internet advertisements
# (c) 2019 Pi-hole, LLC (https://pi-hole.net)
# Network-wide ad blocking via your own hardware.
#
# Updates gravity.db database
#
# This file is copyright under the latest version of the EUPL.
# Please see LICENSE file for your rights under this license.

readonly scriptPath="/etc/.pihole/advanced/Scripts/database_migration/gravity"

upgrade_gravityDB(){
    local database piholeDir auditFile version
    database="${1}"
    piholeDir="${2}"
    auditFile="${piholeDir}/auditlog.list"

    # Get database version
    version="$(sqlite3 "${database}" "SELECT \"value\" FROM \"info\" WHERE \"property\" = 'version';")"

    if [[ "$version" == "1" ]]; then
        # This migration script upgrades the gravity.db file by
        # adding the domain_audit table
        echo -e " ${INFO} Upgrading gravity database from version 1 to 2"
        sqlite3 "${database}" < "${scriptPath}/1_to_2.sql"
        version=2

        # Store audit domains in database table
        if [ -e "${auditFile}" ]; then
            echo -e " ${INFO} Migrating content of ${auditFile} into new database"
            # database_table_from_file is defined in gravity.sh
            database_table_from_file "domain_audit" "${auditFile}"
        fi
    fi
    if [[ "$version" == "2" ]]; then
        # This migration script upgrades the gravity.db file by
        # renaming the regex table to regex_blacklist, and
        # creating a new regex_whitelist table + corresponding linking table and views
        echo -e " ${INFO} Upgrading gravity database from version 2 to 3"
        sqlite3 "${database}" < "${scriptPath}/2_to_3.sql"
        version=3
    fi
    if [[ "$version" == "3" ]]; then
        # This migration script unifies the formerly separated domain
        # lists into a single table with a UNIQUE domain constraint
        echo -e " ${INFO} Upgrading gravity database from version 3 to 4"
        sqlite3 "${database}" < "${scriptPath}/3_to_4.sql"
        version=4
    fi
    if [[ "$version" == "4" ]]; then
        # This migration script upgrades the gravity and list views
        # implementing necessary changes for per-client blocking
        echo -e " ${INFO} Upgrading gravity database from version 4 to 5"
        sqlite3 "${database}" < "${scriptPath}/4_to_5.sql"
        version=5
    fi
    if [[ "$version" == "5" ]]; then
        # This migration script upgrades the adlist view
        # to return an ID used in gravity.sh
        echo -e " ${INFO} Upgrading gravity database from version 5 to 6"
        sqlite3 "${database}" < "${scriptPath}/5_to_6.sql"
        version=6
    fi
    if [[ "$version" == "6" ]]; then
        # This migration script adds a special group with ID 0
        # which is automatically associated to all clients not
        # having their own group assignments
        echo -e " ${INFO} Upgrading gravity database from version 6 to 7"
        sqlite3 "${database}" < "${scriptPath}/6_to_7.sql"
        version=7
    fi
    if [[ "$version" == "7" ]]; then
        # This migration script recreates the group table
        # to ensure uniqueness on the group name
        # We also add date_added and date_modified columns
        echo -e " ${INFO} Upgrading gravity database from version 7 to 8"
        sqlite3 "${database}" < "${scriptPath}/7_to_8.sql"
        version=8
    fi
    if [[ "$version" == "8" ]]; then
        # This migration fixes some issues that were introduced
        # in the previous migration script.
        echo -e " ${INFO} Upgrading gravity database from version 8 to 9"
        sqlite3 "${database}" < "${scriptPath}/8_to_9.sql"
        version=9
    fi
    if [[ "$version" == "9" ]]; then
        # This migration drops unused tables and creates triggers to remove
        # obsolete group assignments when the linked items are deleted
        echo -e " ${INFO} Upgrading gravity database from version 9 to 10"
        sqlite3 "${database}" < "${scriptPath}/9_to_10.sql"
        version=10
    fi
    if [[ "$version" == "10" ]]; then
        # This adds timestamp and an optional comment field to the client table
        # These fields are only temporary and will be replaced by the columns
        # defined in gravity.db.sql during gravity swapping. We add them here
        # to keep the copying process generic (needs the same columns in both the
        # source and the destination databases).
        echo -e " ${INFO} Upgrading gravity database from version 10 to 11"
        sqlite3 "${database}" < "${scriptPath}/10_to_11.sql"
        version=11
    fi
    if [[ "$version" == "11" ]]; then
        # Rename group 0 from "Unassociated" to "Default"
        echo -e " ${INFO} Upgrading gravity database from version 11 to 12"
        sqlite3 "${database}" < "${scriptPath}/11_to_12.sql"
        version=12
    fi
}
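upgrade_gravityDB applies one `N_to_N+1` SQL file per schema step and bumps its local `version` after each, so a database at any starting version walks up to the latest in order. A minimal sketch of that stepwise pattern (an echo stands in for the real `sqlite3` invocation; versions and file names here are illustrative):

```shell
# Walk a schema from version 3 up to version 6, one migration at a time.
version=3
target=6
applied=""
while [ "$version" -lt "$target" ]; do
    next=$((version + 1))
    applied="${applied} ${version}_to_${next}.sql"
    # real script: sqlite3 "${database}" < "${scriptPath}/${version}_to_${next}.sql"
    version=$next
done
echo "applied:${applied}"
```

Because each SQL file also writes the new version into the `info` table, an interrupted upgrade simply resumes at the step where it stopped.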
@@ -1,16 +0,0 @@
.timeout 30000

BEGIN TRANSACTION;

ALTER TABLE client ADD COLUMN date_added INTEGER;
ALTER TABLE client ADD COLUMN date_modified INTEGER;
ALTER TABLE client ADD COLUMN comment TEXT;

CREATE TRIGGER tr_client_update AFTER UPDATE ON client
BEGIN
    UPDATE client SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE id = NEW.id;
END;

UPDATE info SET value = 11 WHERE property = 'version';

COMMIT;
@@ -1,19 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

UPDATE "group" SET name = 'Default' WHERE id = 0;
UPDATE "group" SET description = 'The default group' WHERE id = 0;

DROP TRIGGER IF EXISTS tr_group_zero;

CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
    INSERT OR IGNORE INTO "group" (id,enabled,name,description) VALUES (0,1,'Default','The default group');
END;

UPDATE info SET value = 12 WHERE property = 'version';

COMMIT;
@@ -1,14 +0,0 @@
.timeout 30000

BEGIN TRANSACTION;

CREATE TABLE domain_audit
(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    domain TEXT UNIQUE NOT NULL,
    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int))
);

UPDATE info SET value = 2 WHERE property = 'version';

COMMIT;
@@ -1,65 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

ALTER TABLE regex RENAME TO regex_blacklist;

CREATE TABLE regex_blacklist_by_group
(
    regex_blacklist_id INTEGER NOT NULL REFERENCES regex_blacklist (id),
    group_id INTEGER NOT NULL REFERENCES "group" (id),
    PRIMARY KEY (regex_blacklist_id, group_id)
);

INSERT INTO regex_blacklist_by_group SELECT * FROM regex_by_group;
DROP TABLE regex_by_group;
DROP VIEW vw_regex;
DROP TRIGGER tr_regex_update;

CREATE VIEW vw_regex_blacklist AS SELECT DISTINCT domain
    FROM regex_blacklist
    LEFT JOIN regex_blacklist_by_group ON regex_blacklist_by_group.regex_blacklist_id = regex_blacklist.id
    LEFT JOIN "group" ON "group".id = regex_blacklist_by_group.group_id
    WHERE regex_blacklist.enabled = 1 AND (regex_blacklist_by_group.group_id IS NULL OR "group".enabled = 1)
    ORDER BY regex_blacklist.id;

CREATE TRIGGER tr_regex_blacklist_update AFTER UPDATE ON regex_blacklist
BEGIN
    UPDATE regex_blacklist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;

CREATE TABLE regex_whitelist
(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    domain TEXT UNIQUE NOT NULL,
    enabled BOOLEAN NOT NULL DEFAULT 1,
    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
    comment TEXT
);

CREATE TABLE regex_whitelist_by_group
(
    regex_whitelist_id INTEGER NOT NULL REFERENCES regex_whitelist (id),
    group_id INTEGER NOT NULL REFERENCES "group" (id),
    PRIMARY KEY (regex_whitelist_id, group_id)
);

CREATE VIEW vw_regex_whitelist AS SELECT DISTINCT domain
    FROM regex_whitelist
    LEFT JOIN regex_whitelist_by_group ON regex_whitelist_by_group.regex_whitelist_id = regex_whitelist.id
    LEFT JOIN "group" ON "group".id = regex_whitelist_by_group.group_id
    WHERE regex_whitelist.enabled = 1 AND (regex_whitelist_by_group.group_id IS NULL OR "group".enabled = 1)
    ORDER BY regex_whitelist.id;

CREATE TRIGGER tr_regex_whitelist_update AFTER UPDATE ON regex_whitelist
BEGIN
    UPDATE regex_whitelist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;

UPDATE info SET value = 3 WHERE property = 'version';

COMMIT;
@@ -1,96 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

CREATE TABLE domainlist
(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    type INTEGER NOT NULL DEFAULT 0,
    domain TEXT UNIQUE NOT NULL,
    enabled BOOLEAN NOT NULL DEFAULT 1,
    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
    comment TEXT
);

ALTER TABLE whitelist ADD COLUMN type INTEGER;
UPDATE whitelist SET type = 0;
INSERT INTO domainlist (type,domain,enabled,date_added,date_modified,comment)
    SELECT type,domain,enabled,date_added,date_modified,comment FROM whitelist;

ALTER TABLE blacklist ADD COLUMN type INTEGER;
UPDATE blacklist SET type = 1;
INSERT INTO domainlist (type,domain,enabled,date_added,date_modified,comment)
    SELECT type,domain,enabled,date_added,date_modified,comment FROM blacklist;

ALTER TABLE regex_whitelist ADD COLUMN type INTEGER;
UPDATE regex_whitelist SET type = 2;
INSERT INTO domainlist (type,domain,enabled,date_added,date_modified,comment)
    SELECT type,domain,enabled,date_added,date_modified,comment FROM regex_whitelist;

ALTER TABLE regex_blacklist ADD COLUMN type INTEGER;
UPDATE regex_blacklist SET type = 3;
INSERT INTO domainlist (type,domain,enabled,date_added,date_modified,comment)
    SELECT type,domain,enabled,date_added,date_modified,comment FROM regex_blacklist;

DROP TABLE whitelist_by_group;
DROP TABLE blacklist_by_group;
DROP TABLE regex_whitelist_by_group;
DROP TABLE regex_blacklist_by_group;
CREATE TABLE domainlist_by_group
(
    domainlist_id INTEGER NOT NULL REFERENCES domainlist (id),
    group_id INTEGER NOT NULL REFERENCES "group" (id),
    PRIMARY KEY (domainlist_id, group_id)
);

DROP TRIGGER tr_whitelist_update;
DROP TRIGGER tr_blacklist_update;
DROP TRIGGER tr_regex_whitelist_update;
DROP TRIGGER tr_regex_blacklist_update;
CREATE TRIGGER tr_domainlist_update AFTER UPDATE ON domainlist
BEGIN
    UPDATE domainlist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
END;

DROP VIEW vw_whitelist;
CREATE VIEW vw_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
    FROM domainlist
    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
    AND domainlist.type = 0
    ORDER BY domainlist.id;

DROP VIEW vw_blacklist;
CREATE VIEW vw_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
    FROM domainlist
    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
    AND domainlist.type = 1
    ORDER BY domainlist.id;

DROP VIEW vw_regex_whitelist;
CREATE VIEW vw_regex_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
    FROM domainlist
    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
    AND domainlist.type = 2
    ORDER BY domainlist.id;

DROP VIEW vw_regex_blacklist;
CREATE VIEW vw_regex_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
    FROM domainlist
    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
    AND domainlist.type = 3
    ORDER BY domainlist.id;

UPDATE info SET value = 4 WHERE property = 'version';

COMMIT;
@@ -1,38 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

DROP TABLE gravity;
CREATE TABLE gravity
(
    domain TEXT NOT NULL,
    adlist_id INTEGER NOT NULL REFERENCES adlist (id),
    PRIMARY KEY(domain, adlist_id)
);

DROP VIEW vw_gravity;
CREATE VIEW vw_gravity AS SELECT domain, adlist_by_group.group_id AS group_id
    FROM gravity
    LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = gravity.adlist_id
    LEFT JOIN adlist ON adlist.id = gravity.adlist_id
    LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
    WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1);

CREATE TABLE client
(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    ip TEXT NOT NULL UNIQUE
);

CREATE TABLE client_by_group
(
    client_id INTEGER NOT NULL REFERENCES client (id),
    group_id INTEGER NOT NULL REFERENCES "group" (id),
    PRIMARY KEY (client_id, group_id)
);

UPDATE info SET value = 5 WHERE property = 'version';

COMMIT;
@@ -1,18 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

DROP VIEW vw_adlist;
CREATE VIEW vw_adlist AS SELECT DISTINCT address, adlist.id AS id
    FROM adlist
    LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = adlist.id
    LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
    WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1)
    ORDER BY adlist.id;

UPDATE info SET value = 6 WHERE property = 'version';

COMMIT;
@@ -1,35 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

INSERT OR REPLACE INTO "group" (id,enabled,name) VALUES (0,1,'Unassociated');

INSERT INTO domainlist_by_group (domainlist_id, group_id) SELECT id, 0 FROM domainlist;
INSERT INTO client_by_group (client_id, group_id) SELECT id, 0 FROM client;
INSERT INTO adlist_by_group (adlist_id, group_id) SELECT id, 0 FROM adlist;

CREATE TRIGGER tr_domainlist_add AFTER INSERT ON domainlist
BEGIN
    INSERT INTO domainlist_by_group (domainlist_id, group_id) VALUES (NEW.id, 0);
END;

CREATE TRIGGER tr_client_add AFTER INSERT ON client
BEGIN
    INSERT INTO client_by_group (client_id, group_id) VALUES (NEW.id, 0);
END;

CREATE TRIGGER tr_adlist_add AFTER INSERT ON adlist
BEGIN
    INSERT INTO adlist_by_group (adlist_id, group_id) VALUES (NEW.id, 0);
END;

CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
    INSERT OR REPLACE INTO "group" (id,enabled,name) VALUES (0,1,'Unassociated');
END;

UPDATE info SET value = 7 WHERE property = 'version';

COMMIT;
@@ -1,35 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

ALTER TABLE "group" RENAME TO "group__";

CREATE TABLE "group"
(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    enabled BOOLEAN NOT NULL DEFAULT 1,
    name TEXT UNIQUE NOT NULL,
    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
    description TEXT
);

CREATE TRIGGER tr_group_update AFTER UPDATE ON "group"
BEGIN
    UPDATE "group" SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE id = NEW.id;
END;

INSERT OR IGNORE INTO "group" (id,enabled,name,description) SELECT id,enabled,name,description FROM "group__";

DROP TABLE "group__";

CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
    INSERT OR IGNORE INTO "group" (id,enabled,name) VALUES (0,1,'Unassociated');
END;

UPDATE info SET value = 8 WHERE property = 'version';

COMMIT;
@@ -1,27 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

DROP TRIGGER IF EXISTS tr_group_update;
DROP TRIGGER IF EXISTS tr_group_zero;

PRAGMA legacy_alter_table=ON;
ALTER TABLE "group" RENAME TO "group__";
PRAGMA legacy_alter_table=OFF;
ALTER TABLE "group__" RENAME TO "group";

CREATE TRIGGER tr_group_update AFTER UPDATE ON "group"
BEGIN
    UPDATE "group" SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE id = NEW.id;
END;

CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
BEGIN
    INSERT OR IGNORE INTO "group" (id,enabled,name) VALUES (0,1,'Unassociated');
END;

UPDATE info SET value = 9 WHERE property = 'version';

COMMIT;
@@ -1,29 +0,0 @@
.timeout 30000

PRAGMA FOREIGN_KEYS=OFF;

BEGIN TRANSACTION;

DROP TABLE IF EXISTS whitelist;
DROP TABLE IF EXISTS blacklist;
DROP TABLE IF EXISTS regex_whitelist;
DROP TABLE IF EXISTS regex_blacklist;

CREATE TRIGGER tr_domainlist_delete AFTER DELETE ON domainlist
BEGIN
    DELETE FROM domainlist_by_group WHERE domainlist_id = OLD.id;
END;

CREATE TRIGGER tr_adlist_delete AFTER DELETE ON adlist
BEGIN
    DELETE FROM adlist_by_group WHERE adlist_id = OLD.id;
END;

CREATE TRIGGER tr_client_delete AFTER DELETE ON client
BEGIN
    DELETE FROM client_by_group WHERE client_id = OLD.id;
END;

UPDATE info SET value = 10 WHERE property = 'version';

COMMIT;
@@ -11,87 +11,69 @@
|
|||||||
# Globals
|
# Globals
|
||||||
basename=pihole
|
basename=pihole
|
||||||
piholeDir=/etc/"${basename}"
|
piholeDir=/etc/"${basename}"
|
||||||
-gravityDBfile="${piholeDir}/gravity.db"
+whitelist="${piholeDir}"/whitelist.txt
+blacklist="${piholeDir}"/blacklist.txt
+
+readonly regexlist="/etc/pihole/regex.list"
 reload=false
 addmode=true
 verbose=true
 wildcard=false
-web=false

 domList=()

-typeId=""
-comment=""
-declare -i domaincount
-domaincount=0
+listMain=""
+listAlt=""

 colfile="/opt/pihole/COL_TABLE"
 source ${colfile}

-# IDs are hard-wired to domain interpretation in the gravity database scheme
-# Clients (including FTL) will read them through the corresponding views
-readonly whitelist="0"
-readonly blacklist="1"
-readonly regex_whitelist="2"
-readonly regex_blacklist="3"
-
-GetListnameFromTypeId() {
-    if [[ "$1" == "${whitelist}" ]]; then
-        echo "whitelist"
-    elif [[ "$1" == "${blacklist}" ]]; then
-        echo "blacklist"
-    elif [[ "$1" == "${regex_whitelist}" ]]; then
-        echo "regex whitelist"
-    elif [[ "$1" == "${regex_blacklist}" ]]; then
-        echo "regex blacklist"
-    fi
-}
-
-GetListParamFromTypeId() {
-    if [[ "${typeId}" == "${whitelist}" ]]; then
-        echo "w"
-    elif [[ "${typeId}" == "${blacklist}" ]]; then
-        echo "b"
-    elif [[ "${typeId}" == "${regex_whitelist}" && "${wildcard}" == true ]]; then
-        echo "-white-wild"
-    elif [[ "${typeId}" == "${regex_whitelist}" ]]; then
-        echo "-white-regex"
-    elif [[ "${typeId}" == "${regex_blacklist}" && "${wildcard}" == true ]]; then
-        echo "-wild"
-    elif [[ "${typeId}" == "${regex_blacklist}" ]]; then
-        echo "-regex"
-    fi
-}
 helpFunc() {
-    local listname param
-    listname="$(GetListnameFromTypeId "${typeId}")"
-    param="$(GetListParamFromTypeId)"
+    if [[ "${listMain}" == "${whitelist}" ]]; then
+        param="w"
+        type="white"
+    elif [[ "${listMain}" == "${regexlist}" && "${wildcard}" == true ]]; then
+        param="-wild"
+        type="wildcard black"
+    elif [[ "${listMain}" == "${regexlist}" ]]; then
+        param="-regex"
+        type="regex black"
+    else
+        param="b"
+        type="black"
+    fi

     echo "Usage: pihole -${param} [options] <domain> <domain2 ...>
 Example: 'pihole -${param} site.com', or 'pihole -${param} site1.com site2.com'
-${listname^} one or more domains
+${type^}list one or more domains

 Options:
-  -d, --delmode       Remove domain(s) from the ${listname}
-  -nr, --noreload     Update ${listname} without reloading the DNS server
+  -d, --delmode       Remove domain(s) from the ${type}list
+  -nr, --noreload     Update ${type}list without refreshing dnsmasq
   -q, --quiet         Make output less verbose
   -h, --help          Show this help dialog
-  -l, --list          Display all your ${listname}listed domains
+  -l, --list          Display all your ${type}listed domains
   --nuke              Removes all entries in a list"

     exit 0
 }
-ValidateDomain() {
+EscapeRegexp() {
+    # This way we may safely insert an arbitrary
+    # string in our regular expressions
+    # This sed is intentionally executed in three steps to ease maintainability
+    # The first sed removes any amount of leading dots
+    echo $* | sed 's/^\.*//' | sed "s/[]\.|$(){}?+*^]/\\\\&/g" | sed "s/\\//\\\\\//g"
+}
+
+HandleOther() {
     # Convert to lowercase
     domain="${1,,}"

     # Check validity of domain (don't check for regex entries)
     if [[ "${#domain}" -le 253 ]]; then
-        if [[ ( "${typeId}" == "${regex_blacklist}" || "${typeId}" == "${regex_whitelist}" ) && "${wildcard}" == false ]]; then
+        if [[ "${listMain}" == "${regexlist}" && "${wildcard}" == false ]]; then
             validDomain="${domain}"
         else
             validDomain=$(grep -P "^((-|_)*[a-z\\d]((-|_)*[a-z\\d])*(-|_)*)(\\.(-|_)*([a-z\\d]((-|_)*[a-z\\d])*))*$" <<< "${domain}") # Valid chars check
@@ -100,182 +82,194 @@ ValidateDomain() {
     fi

     if [[ -n "${validDomain}" ]]; then
-        domList=("${domList[@]}" "${validDomain}")
+        domList=("${domList[@]}" ${validDomain})
     else
         echo -e "  ${CROSS} ${domain} is not a valid argument or domain name!"
     fi
-
-    domaincount=$((domaincount+1))
 }
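The `EscapeRegexp` helper and the wildcard-to-regex conversion introduced on the right-hand side above can be exercised on their own. A standalone sketch (the function body and the substitution are copied from the diff; the sample domains are illustrative):

```shell
# EscapeRegexp as it appears in the diff: strip leading dots, then
# backslash-escape regex metacharacters and forward slashes.
EscapeRegexp() {
    echo $* | sed 's/^\.*//' | sed "s/[]\.|$(){}?+*^]/\\\\&/g" | sed "s/\\//\\\\\//g"
}

escaped="$(EscapeRegexp "..ads.example.com")"
echo "${escaped}"    # ads\.example\.com

# The wildcard-to-regex conversion used for "--wild" entries:
dom="example.com"
wild="(^|\\.)${dom//\./\\.}$"
echo "${wild}"       # (^|\.)example\.com$
```

The escaped form is what later gets matched with `grep -Ex` against the plain-text lists, so metacharacters in a user-supplied domain cannot change the meaning of the search pattern.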
-ProcessDomainList() {
-    for dom in "${domList[@]}"; do
-        # Format domain into regex filter if requested
-        if [[ "${wildcard}" == true ]]; then
-            dom="(^|\\.)${dom//\./\\.}$"
-        fi
+PoplistFile() {
+    # Check whitelist file exists, and if not, create it
+    if [[ ! -f "${whitelist}" ]]; then
+        touch "${whitelist}"
+    fi

-        # Logic: If addmode then add to desired list and remove from the other;
-        # if delmode then remove from desired list but do not add to the other
+    # Check blacklist file exists, and if not, create it
+    if [[ ! -f "${blacklist}" ]]; then
+        touch "${blacklist}"
+    fi

+    for dom in "${domList[@]}"; do
+        # Logic: If addmode then add to desired list and remove from the other; if delmode then remove from desired list but do not add to the other
         if ${addmode}; then
-            AddDomain "${dom}"
+            AddDomain "${dom}" "${listMain}"
+            RemoveDomain "${dom}" "${listAlt}"
         else
-            RemoveDomain "${dom}"
+            RemoveDomain "${dom}" "${listMain}"
         fi
     done
 }
 AddDomain() {
-    local domain num requestedListname existingTypeId existingListname
-    domain="$1"
-
-    # Is the domain in the list we want to add it to?
-    num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}';")"
-    requestedListname="$(GetListnameFromTypeId "${typeId}")"
-
-    if [[ "${num}" -ne 0 ]]; then
-        existingTypeId="$(sqlite3 "${gravityDBfile}" "SELECT type FROM domainlist WHERE domain = '${domain}';")"
-        if [[ "${existingTypeId}" == "${typeId}" ]]; then
-            if [[ "${verbose}" == true ]]; then
-                echo -e "  ${INFO} ${1} already exists in ${requestedListname}, no need to add!"
-            fi
-        else
-            existingListname="$(GetListnameFromTypeId "${existingTypeId}")"
-            sqlite3 "${gravityDBfile}" "UPDATE domainlist SET type = ${typeId} WHERE domain='${domain}';"
-            if [[ "${verbose}" == true ]]; then
-                echo -e "  ${INFO} ${1} already exists in ${existingListname}, it has been moved to ${requestedListname}!"
-            fi
-        fi
-        return
-    fi
-
-    # Domain not found in the table, add it!
-    if [[ "${verbose}" == true ]]; then
-        echo -e "  ${INFO} Adding ${domain} to the ${requestedListname}..."
-    fi
-    reload=true
-    # Insert only the domain here. The enabled and date_added fields will be filled
-    # with their default values (enabled = true, date_added = current timestamp)
-    if [[ -z "${comment}" ]]; then
-        sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type) VALUES ('${domain}',${typeId});"
-    else
-        # also add comment when variable has been set through the "--comment" option
-        sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type,comment) VALUES ('${domain}',${typeId},'${comment}');"
-    fi
+    list="$2"
+    domain=$(EscapeRegexp "$1")
+
+    [[ "${list}" == "${whitelist}" ]] && listname="whitelist"
+    [[ "${list}" == "${blacklist}" ]] && listname="blacklist"
+
+    if [[ "${list}" == "${whitelist}" || "${list}" == "${blacklist}" ]]; then
+        [[ "${list}" == "${whitelist}" && -z "${type}" ]] && type="--whitelist-only"
+        [[ "${list}" == "${blacklist}" && -z "${type}" ]] && type="--blacklist-only"
+        bool=true
+        # Is the domain in the list we want to add it to?
+        grep -Ex -q "${domain}" "${list}" > /dev/null 2>&1 || bool=false
+
+        if [[ "${bool}" == false ]]; then
+            # Domain not found in the whitelist file, add it!
+            if [[ "${verbose}" == true ]]; then
+                echo -e "  ${INFO} Adding ${1} to ${listname}..."
+            fi
+            reload=true
+            # Add it to the list we want to add it to
+            echo "$1" >> "${list}"
+        else
+            if [[ "${verbose}" == true ]]; then
+                echo -e "  ${INFO} ${1} already exists in ${listname}, no need to add!"
+            fi
+        fi
+    elif [[ "${list}" == "${regexlist}" ]]; then
+        [[ -z "${type}" ]] && type="--wildcard-only"
+        bool=true
+        domain="${1}"
+
+        [[ "${wildcard}" == true ]] && domain="(^|\\.)${domain//\./\\.}$"
+
+        # Is the domain in the list?
+        # Search only for exactly matching lines
+        grep -Fx "${domain}" "${regexlist}" > /dev/null 2>&1 || bool=false
+
+        if [[ "${bool}" == false ]]; then
+            if [[ "${verbose}" == true ]]; then
+                echo -e "  ${INFO} Adding ${domain} to regex list..."
+            fi
+            reload="restart"
+            echo "$domain" >> "${regexlist}"
+        else
+            if [[ "${verbose}" == true ]]; then
+                echo -e "  ${INFO} ${domain} already exists in regex list, no need to add!"
+            fi
+        fi
+    fi
 }
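On the right-hand (file-based) side, `AddDomain` reduces to a grep-then-append idempotency check. A minimal standalone sketch against a throwaway file (the temp path and helper name are illustrative; the grep/echo pattern is the one in the diff):

```shell
list="$(mktemp)"            # stand-in for a list file such as whitelist.txt
domain="example.com"

add_if_missing() {
    # Same exact-match check AddDomain uses before appending
    if ! grep -Ex -q "${domain}" "${list}" > /dev/null 2>&1; then
        echo "${domain}" >> "${list}"
    fi
}

add_if_missing
add_if_missing                      # second call is a no-op
entries="$(grep -c . "${list}")"
echo "${entries}"                   # 1
rm -f "${list}"
```

`grep -Ex` anchors the match to the whole line, which is why the domain is regex-escaped before it reaches this check.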
 RemoveDomain() {
-    local domain num requestedListname
-    domain="$1"
-
-    # Is the domain in the list we want to remove it from?
-    num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};")"
-
-    requestedListname="$(GetListnameFromTypeId "${typeId}")"
-
-    if [[ "${num}" -eq 0 ]]; then
-        if [[ "${verbose}" == true ]]; then
-            echo -e "  ${INFO} ${domain} does not exist in ${requestedListname}, no need to remove!"
-        fi
-        return
-    fi
-
-    # Domain found in the table, remove it!
-    if [[ "${verbose}" == true ]]; then
-        echo -e "  ${INFO} Removing ${domain} from the ${requestedListname}..."
-    fi
-    reload=true
-    # Remove it from the current list
-    sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};"
+    list="$2"
+    domain=$(EscapeRegexp "$1")
+
+    [[ "${list}" == "${whitelist}" ]] && listname="whitelist"
+    [[ "${list}" == "${blacklist}" ]] && listname="blacklist"
+
+    if [[ "${list}" == "${whitelist}" || "${list}" == "${blacklist}" ]]; then
+        bool=true
+        [[ "${list}" == "${whitelist}" && -z "${type}" ]] && type="--whitelist-only"
+        [[ "${list}" == "${blacklist}" && -z "${type}" ]] && type="--blacklist-only"
+        # Is it in the list? Logic follows that if its whitelisted it should not be blacklisted and vice versa
+        grep -Ex -q "${domain}" "${list}" > /dev/null 2>&1 || bool=false
+        if [[ "${bool}" == true ]]; then
+            # Remove it from the other one
+            echo -e "  ${INFO} Removing $1 from ${listname}..."
+            # /I flag: search case-insensitive
+            sed -i "/${domain}/Id" "${list}"
+            reload=true
+        else
+            if [[ "${verbose}" == true ]]; then
+                echo -e "  ${INFO} ${1} does not exist in ${listname}, no need to remove!"
+            fi
+        fi
+    elif [[ "${list}" == "${regexlist}" ]]; then
+        [[ -z "${type}" ]] && type="--wildcard-only"
+        domain="${1}"
+
+        [[ "${wildcard}" == true ]] && domain="(^|\\.)${domain//\./\\.}$"
+
+        bool=true
+        # Is it in the list?
+        grep -Fx "${domain}" "${regexlist}" > /dev/null 2>&1 || bool=false
+        if [[ "${bool}" == true ]]; then
+            # Remove it from the other one
+            echo -e "  ${INFO} Removing $domain from regex list..."
+            local lineNumber
+            lineNumber=$(grep -Fnx "$domain" "${list}" | cut -f1 -d:)
+            sed -i "${lineNumber}d" "${list}"
+            reload=true
+        else
+            if [[ "${verbose}" == true ]]; then
+                echo -e "  ${INFO} ${domain} does not exist in regex list, no need to remove!"
+            fi
+        fi
+    fi
 }
+
+# Update Gravity
+Reload() {
+    echo ""
+    pihole -g --skip-download "${type:-}"
+}
 Displaylist() {
-    local count num_pipes domain enabled status nicedate requestedListname
-
-    requestedListname="$(GetListnameFromTypeId "${typeId}")"
-    data="$(sqlite3 "${gravityDBfile}" "SELECT domain,enabled,date_modified FROM domainlist WHERE type = ${typeId};" 2> /dev/null)"
-
-    if [[ -z $data ]]; then
-        echo -e "Not showing empty list"
-    else
-        echo -e "Displaying ${requestedListname}:"
-        count=1
-        while IFS= read -r line
-        do
-            # Count number of pipes seen in this line
-            # This is necessary because we can only detect the pipe separating the fields
-            # from the end backwards as the domain (which is the first field) may contain
-            # pipe symbols as they are perfectly valid regex filter control characters
-            num_pipes="$(grep -c "^" <<< "$(grep -o "|" <<< "${line}")")"
-
-            # Extract domain and enabled status based on the obtained number of pipe characters
-            domain="$(cut -d'|' -f"-$((num_pipes-1))" <<< "${line}")"
-            enabled="$(cut -d'|' -f"$((num_pipes))" <<< "${line}")"
-            datemod="$(cut -d'|' -f"$((num_pipes+1))" <<< "${line}")"
-
-            # Translate boolean status into human readable string
-            if [[ "${enabled}" -eq 1 ]]; then
-                status="enabled"
-            else
-                status="disabled"
-            fi
-
-            # Get nice representation of numerical date stored in database
-            nicedate=$(date --rfc-2822 -d "@${datemod}")
-
-            echo "  ${count}: ${domain} (${status}, last modified ${nicedate})"
-            count=$((count+1))
-        done <<< "${data}"
-    fi
+    if [[ -f ${listMain} ]]; then
+        if [[ "${listMain}" == "${whitelist}" ]]; then
+            string="gravity resistant domains"
+        else
+            string="domains caught in the sinkhole"
+        fi
+        verbose=false
+        echo -e "Displaying $string:\n"
+        count=1
+        while IFS= read -r RD || [ -n "${RD}" ]; do
+            echo "  ${count}: ${RD}"
+            count=$((count+1))
+        done < "${listMain}"
+    else
+        echo -e "  ${COL_LIGHT_RED}${listMain} does not exist!${COL_NC}"
+    fi
    exit 0;
 }
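The left-hand `Displaylist` parses sqlite3's default pipe-separated rows from the end backwards, because a regex-filter domain may itself contain `|` characters. The field extraction can be checked in isolation (the sample row, including the timestamp, is illustrative):

```shell
# Example row as sqlite3 prints it: domain|enabled|date_modified.
# The domain is a regex containing "|", so fields are located from the
# end backwards, exactly as Displaylist does in the left column.
line='(^|\.)doubleclick\.net$|1|1573167547'

num_pipes="$(grep -c "^" <<< "$(grep -o "|" <<< "${line}")")"
domain="$(cut -d'|' -f"-$((num_pipes-1))" <<< "${line}")"
enabled="$(cut -d'|' -f"$((num_pipes))" <<< "${line}")"
datemod="$(cut -d'|' -f"$((num_pipes+1))" <<< "${line}")"

echo "${domain}"    # (^|\.)doubleclick\.net$
echo "${enabled}"   # 1
echo "${datemod}"   # 1573167547
```

Here `num_pipes` is 3, so the domain spans fields 1 through 2 and `cut` rejoins them with the original `|` delimiter, keeping the regex intact.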
 NukeList() {
-    sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE type = ${typeId};"
-}
-
-GetComment() {
-    comment="$1"
-    if [[ "${comment}" =~ [^a-zA-Z0-9_\#:/\.,\ -] ]]; then
-        echo "  ${CROSS} Found invalid characters in domain comment!"
-        exit
-    fi
+    if [[ -f "${listMain}" ]]; then
+        # Back up original list
+        cp "${listMain}" "${listMain}.bck~"
+        # Empty out file
+        echo "" > "${listMain}"
+    fi
 }
-while (( "$#" )); do
-    case "${1}" in
-        "-w" | "whitelist"   ) typeId=0;;
-        "-b" | "blacklist"   ) typeId=1;;
-        "--white-regex" | "white-regex" ) typeId=2;;
-        "--white-wild" | "white-wild" ) typeId=2; wildcard=true;;
-        "--wild" | "wildcard" ) typeId=3; wildcard=true;;
-        "--regex" | "regex"  ) typeId=3;;
+for var in "$@"; do
+    case "${var}" in
+        "-w" | "whitelist"   ) listMain="${whitelist}"; listAlt="${blacklist}";;
+        "-b" | "blacklist"   ) listMain="${blacklist}"; listAlt="${whitelist}";;
+        "--wild" | "wildcard" ) listMain="${regexlist}"; wildcard=true;;
+        "--regex" | "regex"  ) listMain="${regexlist}";;
         "-nr"| "--noreload"  ) reload=false;;
         "-d" | "--delmode"   ) addmode=false;;
         "-q" | "--quiet"     ) verbose=false;;
         "-h" | "--help"      ) helpFunc;;
         "-l" | "--list"      ) Displaylist;;
         "--nuke"             ) NukeList;;
-        "--web"              ) web=true;;
-        "--comment"          ) GetComment "${2}"; shift;;
-        *                    ) ValidateDomain "${1}";;
+        *                    ) HandleOther "${var}";;
     esac
-    shift
 done

 shift

-if [[ ${domaincount} == 0 ]]; then
+if [[ $# = 0 ]]; then
     helpFunc
 fi

-ProcessDomainList
+PoplistFile

-# Used on web interface
-if $web; then
-    echo "DONE"
-fi
-
 if [[ "${reload}" != false ]]; then
-    pihole restartdns reload-lists
+    # Ensure that "restart" is used for Wildcard updates
+    Reload "${reload}"
 fi
@@ -1,23 +0,0 @@
-#!/bin/bash
-# Pi-hole: A black hole for Internet advertisements
-# (c) 2020 Pi-hole, LLC (https://pi-hole.net)
-# Network-wide ad blocking via your own hardware.
-#
-# This file is copyright under the latest version of the EUPL.
-# Please see LICENSE file for your rights under this license.
-#
-#
-# The pihole disable command has the option to set a specified time before
-# blocking is automatically re-enabled.
-#
-# Present script is responsible for the sleep & re-enable part of the job and
-# is automatically terminated if it is still running when pihole is enabled by
-# other means.
-#
-# This ensures that pihole ends up in the correct state after a sequence of
-# commands suchs as: `pihole disable 30s; pihole enable; pihole disable`
-
-readonly PI_HOLE_BIN_DIR="/usr/local/bin"
-
-sleep "${1}"
-"${PI_HOLE_BIN_DIR}"/pihole enable
@@ -1,66 +0,0 @@
-#!/usr/bin/env bash
-# shellcheck disable=SC1090
-
-# Pi-hole: A black hole for Internet advertisements
-# (c) 2019 Pi-hole, LLC (https://pi-hole.net)
-# Network-wide ad blocking via your own hardware.
-#
-# ARP table interaction
-#
-# This file is copyright under the latest version of the EUPL.
-# Please see LICENSE file for your rights under this license.
-
-coltable="/opt/pihole/COL_TABLE"
-if [[ -f ${coltable} ]]; then
-    source ${coltable}
-fi
-
-# Determine database location
-# Obtain DBFILE=... setting from pihole-FTL.db
-# Constructed to return nothing when
-# a) the setting is not present in the config file, or
-# b) the setting is commented out (e.g. "#DBFILE=...")
-FTLconf="/etc/pihole/pihole-FTL.conf"
-if [ -e "$FTLconf" ]; then
-    DBFILE="$(sed -n -e 's/^\s*DBFILE\s*=\s*//p' ${FTLconf})"
-fi
-# Test for empty string. Use standard path in this case.
-if [ -z "$DBFILE" ]; then
-    DBFILE="/etc/pihole/pihole-FTL.db"
-fi
-
-
-flushARP(){
-    local output
-    if [[ "${args[1]}" != "quiet" ]]; then
-        echo -ne "  ${INFO} Flushing network table ..."
-    fi
-
-    # Truncate network_addresses table in pihole-FTL.db
-    # This needs to be done before we can truncate the network table due to
-    # foreign key contraints
-    if ! output=$(sqlite3 "${DBFILE}" "DELETE FROM network_addresses" 2>&1); then
-        echo -e "${OVER}  ${CROSS} Failed to truncate network_addresses table"
-        echo "    Database location: ${DBFILE}"
-        echo "    Output: ${output}"
-        return 1
-    fi
-
-    # Truncate network table in pihole-FTL.db
-    if ! output=$(sqlite3 "${DBFILE}" "DELETE FROM network" 2>&1); then
-        echo -e "${OVER}  ${CROSS} Failed to truncate network table"
-        echo "    Database location: ${DBFILE}"
-        echo "    Output: ${output}"
-        return 1
-    fi
-
-    if [[ "${args[1]}" != "quiet" ]]; then
-        echo -e "${OVER}  ${TICK} Flushed network table"
-    fi
-}
-
-args=("$@")
-
-case "${args[0]}" in
-    "arpflush"            ) flushARP;;
-esac
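The deleted ARP-flush script locates the FTL database by scraping the `DBFILE=` setting out of pihole-FTL.conf with sed; commented-out or absent settings yield an empty string and fall through to the default path. A standalone check of that sed line against a throwaway config (temp file and values are illustrative):

```shell
# Stand-in for /etc/pihole/pihole-FTL.conf: one comment, one commented-out
# setting, one active setting. Only the active line should match.
FTLconf="$(mktemp)"
printf '%s\n' '# comment' '#DBFILE=/ignored/path.db' 'DBFILE=/tmp/pihole-FTL.db' > "$FTLconf"

# Same extraction as the deleted script (GNU sed \s syntax)
DBFILE="$(sed -n -e 's/^\s*DBFILE\s*=\s*//p' ${FTLconf})"
if [ -z "$DBFILE" ]; then
    DBFILE="/etc/pihole/pihole-FTL.db"   # default when nothing matched
fi
echo "$DBFILE"   # /tmp/pihole-FTL.db

rm -f "$FTLconf"
```

The `^\s*DBFILE` anchor is what makes `#DBFILE=...` fail to match, so commenting a setting out behaves the same as deleting it.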
@@ -3,7 +3,7 @@
 # (c) 2017 Pi-hole, LLC (https://pi-hole.net)
 # Network-wide ad blocking via your own hardware.
 #
-# Switch Pi-hole subsystems to a different GitHub branch.
+# Switch Pi-hole subsystems to a different Github branch.
 #
 # This file is copyright under the latest version of the EUPL.
 # Please see LICENSE file for your rights under this license.
@@ -36,7 +36,7 @@ warning1() {
             return 0
             ;;
         *)
-            echo -e "\\n  ${INFO} Branch change has been canceled"
+            echo -e "\\n  ${INFO} Branch change has been cancelled"
             return 1
             ;;
     esac
@@ -46,12 +46,6 @@ checkout() {
     local corebranches
     local webbranches

-    # Check if FTL is installed - do this early on as FTL is a hard dependency for Pi-hole
-    local funcOutput
-    funcOutput=$(get_binary_name) #Store output of get_binary_name here
-    local binary
-    binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)
-
     # Avoid globbing
     set -f

@@ -84,7 +78,7 @@ checkout() {
         echo -e "  ${INFO} Shortcut \"dev\" detected - checking out development / devel branches..."
         echo ""
         echo -e "  ${INFO} Pi-hole Core"
-        fetch_checkout_pull_branch "${PI_HOLE_FILES_DIR}" "development" || { echo "  ${CROSS} Unable to pull Core development branch"; exit 1; }
+        fetch_checkout_pull_branch "${PI_HOLE_FILES_DIR}" "development" || { echo "  ${CROSS} Unable to pull Core developement branch"; exit 1; }
         if [[ "${INSTALL_WEB_INTERFACE}" == "true" ]]; then
             echo ""
             echo -e "  ${INFO} Web interface"
@@ -92,10 +86,10 @@ checkout() {
         fi
         #echo -e "  ${TICK} Pi-hole Core"

+        get_binary_name
         local path
         path="development/${binary}"
         echo "development" > /etc/pihole/ftlbranch
-        chmod 644 /etc/pihole/ftlbranch
     elif [[ "${1}" == "master" ]] ; then
         # Shortcut to check out master branches
         echo -e "  ${INFO} Shortcut \"master\" detected - checking out master branches..."
@@ -106,10 +100,10 @@ checkout() {
         fetch_checkout_pull_branch "${webInterfaceDir}" "master" || { echo "  ${CROSS} Unable to pull Web master branch"; exit 1; }
         fi
         #echo -e "  ${TICK} Web Interface"
+        get_binary_name
         local path
         path="master/${binary}"
         echo "master" > /etc/pihole/ftlbranch
-        chmod 644 /etc/pihole/ftlbranch
     elif [[ "${1}" == "core" ]] ; then
         str="Fetching branches from ${piholeGitUrl}"
         echo -ne "  ${INFO} $str"
@@ -165,13 +159,13 @@ checkout() {
     fi
         checkout_pull_branch "${webInterfaceDir}" "${2}"
     elif [[ "${1}" == "ftl" ]] ; then
+        get_binary_name
         local path
         path="${2}/${binary}"

         if check_download_exists "$path"; then
             echo "  ${TICK} Branch ${2} exists"
             echo "${2}" > /etc/pihole/ftlbranch
-            chmod 644 /etc/pihole/ftlbranch
             FTLinstall "${binary}"
             restart_service pihole-FTL
             enable_service pihole-FTL
@@ -46,8 +46,8 @@ OBFUSCATED_PLACEHOLDER="<DOMAIN OBFUSCATED>"
|
|||||||
# FAQ URLs for use in showing the debug log
|
# FAQ URLs for use in showing the debug log
|
||||||
FAQ_UPDATE_PI_HOLE="${COL_CYAN}https://discourse.pi-hole.net/t/how-do-i-update-pi-hole/249${COL_NC}"
|
FAQ_UPDATE_PI_HOLE="${COL_CYAN}https://discourse.pi-hole.net/t/how-do-i-update-pi-hole/249${COL_NC}"
|
||||||
FAQ_CHECKOUT_COMMAND="${COL_CYAN}https://discourse.pi-hole.net/t/the-pihole-command-with-examples/738#checkout${COL_NC}"
|
FAQ_CHECKOUT_COMMAND="${COL_CYAN}https://discourse.pi-hole.net/t/the-pihole-command-with-examples/738#checkout${COL_NC}"
|
||||||
FAQ_HARDWARE_REQUIREMENTS="${COL_CYAN}https://docs.pi-hole.net/main/prerequisites/${COL_NC}"
|
FAQ_HARDWARE_REQUIREMENTS="${COL_CYAN}https://discourse.pi-hole.net/t/hardware-software-requirements/273${COL_NC}"
|
||||||
FAQ_HARDWARE_REQUIREMENTS_PORTS="${COL_CYAN}https://docs.pi-hole.net/main/prerequisites/#ports${COL_NC}"
|
FAQ_HARDWARE_REQUIREMENTS_PORTS="${COL_CYAN}https://discourse.pi-hole.net/t/hardware-software-requirements/273#ports${COL_NC}"
|
||||||
FAQ_GATEWAY="${COL_CYAN}https://discourse.pi-hole.net/t/why-is-a-default-gateway-important-for-pi-hole/3546${COL_NC}"
|
FAQ_GATEWAY="${COL_CYAN}https://discourse.pi-hole.net/t/why-is-a-default-gateway-important-for-pi-hole/3546${COL_NC}"
|
||||||
FAQ_ULA="${COL_CYAN}https://discourse.pi-hole.net/t/use-ipv6-ula-addresses-for-pi-hole/2127${COL_NC}"
|
FAQ_ULA="${COL_CYAN}https://discourse.pi-hole.net/t/use-ipv6-ula-addresses-for-pi-hole/2127${COL_NC}"
|
||||||
FAQ_FTL_COMPATIBILITY="${COL_CYAN}https://github.com/pi-hole/FTL#compatibility-list${COL_NC}"
|
FAQ_FTL_COMPATIBILITY="${COL_CYAN}https://github.com/pi-hole/FTL#compatibility-list${COL_NC}"
|
||||||
@@ -70,7 +70,7 @@ PIHOLE_DIRECTORY="/etc/pihole"
|
|||||||
PIHOLE_SCRIPTS_DIRECTORY="/opt/pihole"
|
PIHOLE_SCRIPTS_DIRECTORY="/opt/pihole"
|
||||||
BIN_DIRECTORY="/usr/local/bin"
|
BIN_DIRECTORY="/usr/local/bin"
|
||||||
RUN_DIRECTORY="/run"
|
RUN_DIRECTORY="/run"
|
||||||
LOG_DIRECTORY="/var/log/pihole"
|
LOG_DIRECTORY="/var/log"
|
||||||
WEB_SERVER_LOG_DIRECTORY="${LOG_DIRECTORY}/lighttpd"
|
WEB_SERVER_LOG_DIRECTORY="${LOG_DIRECTORY}/lighttpd"
|
||||||
WEB_SERVER_CONFIG_DIRECTORY="/etc/lighttpd"
|
WEB_SERVER_CONFIG_DIRECTORY="/etc/lighttpd"
|
||||||
HTML_DIRECTORY="/var/www/html"
|
HTML_DIRECTORY="/var/www/html"
|
||||||
@@ -87,42 +87,18 @@ PIHOLE_DHCP_CONFIG_FILE="${DNSMASQ_D_DIRECTORY}/02-pihole-dhcp.conf"
|
|||||||
PIHOLE_WILDCARD_CONFIG_FILE="${DNSMASQ_D_DIRECTORY}/03-wildcard.conf"
|
PIHOLE_WILDCARD_CONFIG_FILE="${DNSMASQ_D_DIRECTORY}/03-wildcard.conf"
|
||||||
|
|
||||||
WEB_SERVER_CONFIG_FILE="${WEB_SERVER_CONFIG_DIRECTORY}/lighttpd.conf"
|
WEB_SERVER_CONFIG_FILE="${WEB_SERVER_CONFIG_DIRECTORY}/lighttpd.conf"
|
||||||
WEB_SERVER_CUSTOM_CONFIG_FILE="${WEB_SERVER_CONFIG_DIRECTORY}/external.conf"
|
#WEB_SERVER_CUSTOM_CONFIG_FILE="${WEB_SERVER_CONFIG_DIRECTORY}/external.conf"
|
||||||
|
|
||||||
|
PIHOLE_DEFAULT_AD_LISTS="${PIHOLE_DIRECTORY}/adlists.default"
|
||||||
|
PIHOLE_USER_DEFINED_AD_LISTS="${PIHOLE_DIRECTORY}/adlists.list"
|
||||||
|
PIHOLE_BLACKLIST_FILE="${PIHOLE_DIRECTORY}/blacklist.txt"
|
||||||
|
PIHOLE_BLOCKLIST_FILE="${PIHOLE_DIRECTORY}/gravity.list"
|
||||||
PIHOLE_INSTALL_LOG_FILE="${PIHOLE_DIRECTORY}/install.log"
|
PIHOLE_INSTALL_LOG_FILE="${PIHOLE_DIRECTORY}/install.log"
|
||||||
PIHOLE_RAW_BLOCKLIST_FILES="${PIHOLE_DIRECTORY}/list.*"
|
PIHOLE_RAW_BLOCKLIST_FILES="${PIHOLE_DIRECTORY}/list.*"
|
||||||
PIHOLE_LOCAL_HOSTS_FILE="${PIHOLE_DIRECTORY}/local.list"
|
PIHOLE_LOCAL_HOSTS_FILE="${PIHOLE_DIRECTORY}/local.list"
|
||||||
PIHOLE_LOGROTATE_FILE="${PIHOLE_DIRECTORY}/logrotate"
|
PIHOLE_LOGROTATE_FILE="${PIHOLE_DIRECTORY}/logrotate"
|
||||||
PIHOLE_SETUP_VARS_FILE="${PIHOLE_DIRECTORY}/setupVars.conf"
|
PIHOLE_SETUP_VARS_FILE="${PIHOLE_DIRECTORY}/setupVars.conf"
|
||||||
PIHOLE_FTL_CONF_FILE="${PIHOLE_DIRECTORY}/pihole-FTL.conf"
|
PIHOLE_WHITELIST_FILE="${PIHOLE_DIRECTORY}/whitelist.txt"
|
||||||
|
|
||||||
# Read the value of an FTL config key. The value is printed to stdout.
|
|
||||||
#
|
|
||||||
# Args:
|
|
||||||
# 1. The key to read
|
|
||||||
# 2. The default if the setting or config does not exist
|
|
||||||
get_ftl_conf_value() {
|
|
||||||
local key=$1
|
|
||||||
local default=$2
|
|
||||||
local value
|
|
||||||
|
|
||||||
# Obtain key=... setting from pihole-FTL.conf
|
|
||||||
if [[ -e "$PIHOLE_FTL_CONF_FILE" ]]; then
|
|
||||||
# Constructed to return nothing when
|
|
||||||
# a) the setting is not present in the config file, or
|
|
||||||
# b) the setting is commented out (e.g. "#DBFILE=...")
|
|
||||||
value="$(sed -n -e "s/^\\s*$key=\\s*//p" ${PIHOLE_FTL_CONF_FILE})"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Test for missing value. Use default value in this case.
|
|
||||||
if [[ -z "$value" ]]; then
|
|
||||||
value="$default"
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo "$value"
|
|
||||||
}
|
|
||||||
|
|
||||||
PIHOLE_GRAVITY_DB_FILE="$(get_ftl_conf_value "GRAVITYDB" "${PIHOLE_DIRECTORY}/gravity.db")"
|
|
||||||
|
|
||||||
PIHOLE_COMMAND="${BIN_DIRECTORY}/pihole"
|
PIHOLE_COMMAND="${BIN_DIRECTORY}/pihole"
|
||||||
PIHOLE_COLTABLE_FILE="${BIN_DIRECTORY}/COL_TABLE"
|
PIHOLE_COLTABLE_FILE="${BIN_DIRECTORY}/COL_TABLE"
|
||||||
@@ -133,12 +109,12 @@ FTL_PORT="${RUN_DIRECTORY}/pihole-FTL.port"
|
|||||||
PIHOLE_LOG="${LOG_DIRECTORY}/pihole.log"
|
PIHOLE_LOG="${LOG_DIRECTORY}/pihole.log"
|
||||||
PIHOLE_LOG_GZIPS="${LOG_DIRECTORY}/pihole.log.[0-9].*"
|
PIHOLE_LOG_GZIPS="${LOG_DIRECTORY}/pihole.log.[0-9].*"
|
||||||
PIHOLE_DEBUG_LOG="${LOG_DIRECTORY}/pihole_debug.log"
|
PIHOLE_DEBUG_LOG="${LOG_DIRECTORY}/pihole_debug.log"
|
||||||
PIHOLE_FTL_LOG="$(get_ftl_conf_value "LOGFILE" "${LOG_DIRECTORY}/pihole-FTL.log")"
|
PIHOLE_FTL_LOG="${LOG_DIRECTORY}/pihole-FTL.log"
|
||||||
|
|
||||||
PIHOLE_WEB_SERVER_ACCESS_LOG_FILE="${WEB_SERVER_LOG_DIRECTORY}/access.log"
|
PIHOLE_WEB_SERVER_ACCESS_LOG_FILE="${WEB_SERVER_LOG_DIRECTORY}/access.log"
|
||||||
PIHOLE_WEB_SERVER_ERROR_LOG_FILE="${WEB_SERVER_LOG_DIRECTORY}/error.log"
|
PIHOLE_WEB_SERVER_ERROR_LOG_FILE="${WEB_SERVER_LOG_DIRECTORY}/error.log"
|
||||||
|
|
||||||
# An array of operating system "pretty names" that we officially support
|
# An array of operating system "pretty names" that we officialy support
|
||||||
# We can loop through the array at any time to see if it matches a value
|
# We can loop through the array at any time to see if it matches a value
|
||||||
#SUPPORTED_OS=("Raspbian" "Ubuntu" "Fedora" "Debian" "CentOS")
|
#SUPPORTED_OS=("Raspbian" "Ubuntu" "Fedora" "Debian" "CentOS")
|
||||||
|
|
||||||
@@ -166,13 +142,16 @@ REQUIRED_FILES=("${PIHOLE_CRON_FILE}"
|
|||||||
"${PIHOLE_DHCP_CONFIG_FILE}"
|
"${PIHOLE_DHCP_CONFIG_FILE}"
|
||||||
"${PIHOLE_WILDCARD_CONFIG_FILE}"
|
"${PIHOLE_WILDCARD_CONFIG_FILE}"
|
||||||
"${WEB_SERVER_CONFIG_FILE}"
|
"${WEB_SERVER_CONFIG_FILE}"
|
||||||
"${WEB_SERVER_CUSTOM_CONFIG_FILE}"
|
"${PIHOLE_DEFAULT_AD_LISTS}"
|
||||||
|
"${PIHOLE_USER_DEFINED_AD_LISTS}"
|
||||||
|
"${PIHOLE_BLACKLIST_FILE}"
|
||||||
|
"${PIHOLE_BLOCKLIST_FILE}"
|
||||||
"${PIHOLE_INSTALL_LOG_FILE}"
|
"${PIHOLE_INSTALL_LOG_FILE}"
|
||||||
"${PIHOLE_RAW_BLOCKLIST_FILES}"
|
"${PIHOLE_RAW_BLOCKLIST_FILES}"
|
||||||
"${PIHOLE_LOCAL_HOSTS_FILE}"
|
"${PIHOLE_LOCAL_HOSTS_FILE}"
|
||||||
"${PIHOLE_LOGROTATE_FILE}"
|
"${PIHOLE_LOGROTATE_FILE}"
|
||||||
"${PIHOLE_SETUP_VARS_FILE}"
|
"${PIHOLE_SETUP_VARS_FILE}"
|
||||||
"${PIHOLE_FTL_CONF_FILE}"
|
"${PIHOLE_WHITELIST_FILE}"
|
||||||
"${PIHOLE_COMMAND}"
|
"${PIHOLE_COMMAND}"
|
||||||
"${PIHOLE_COLTABLE_FILE}"
|
"${PIHOLE_COLTABLE_FILE}"
|
||||||
"${FTL_PID}"
|
"${FTL_PID}"
|
||||||
@@ -298,15 +277,11 @@ compare_local_version_to_git_version() {
|
|||||||
log_write "${INFO} ${pihole_component}: ${COL_YELLOW}${remote_version:-Untagged}${COL_NC} (${FAQ_UPDATE_PI_HOLE})"
|
log_write "${INFO} ${pihole_component}: ${COL_YELLOW}${remote_version:-Untagged}${COL_NC} (${FAQ_UPDATE_PI_HOLE})"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Print the repo upstreams
|
|
||||||
remotes=$(git remote -v)
|
|
||||||
log_write "${INFO} Remotes: ${remotes//$'\n'/'\n '}"
|
|
||||||
|
|
||||||
# If the repo is on the master branch, they are on the stable codebase
|
# If the repo is on the master branch, they are on the stable codebase
|
||||||
if [[ "${remote_branch}" == "master" ]]; then
|
if [[ "${remote_branch}" == "master" ]]; then
|
||||||
# so the color of the text is green
|
# so the color of the text is green
|
||||||
log_write "${INFO} Branch: ${COL_GREEN}${remote_branch}${COL_NC}"
|
log_write "${INFO} Branch: ${COL_GREEN}${remote_branch}${COL_NC}"
|
||||||
# If it is any other branch, they are in a development branch
|
# If it is any other branch, they are in a development branch
|
||||||
else
|
else
|
||||||
# So show that in yellow, signifying it's something to take a look at, but not a critical error
|
# So show that in yellow, signifying it's something to take a look at, but not a critical error
|
||||||
log_write "${INFO} Branch: ${COL_YELLOW}${remote_branch:-Detached}${COL_NC} (${FAQ_CHECKOUT_COMMAND})"
|
log_write "${INFO} Branch: ${COL_YELLOW}${remote_branch:-Detached}${COL_NC} (${FAQ_CHECKOUT_COMMAND})"
|
||||||
@@ -363,7 +338,7 @@ check_component_versions() {
|
|||||||
|
|
||||||
get_program_version() {
|
get_program_version() {
|
||||||
local program_name="${1}"
|
local program_name="${1}"
|
||||||
# Create a local variable so this function can be safely reused
|
# Create a local variable so this function can be safely reused
|
||||||
local program_version
|
local program_version
|
||||||
echo_current_diagnostic "${program_name} version"
|
echo_current_diagnostic "${program_name} version"
|
||||||
# Evaluate the program we are checking, if it is any of the ones below, show the version
|
# Evaluate the program we are checking, if it is any of the ones below, show the version
|
||||||
@@ -393,58 +368,53 @@ check_critical_program_versions() {
|
|||||||
get_program_version "php"
|
get_program_version "php"
|
||||||
}
|
}
|
||||||
|
|
||||||
os_check() {
|
is_os_supported() {
|
||||||
# This function gets a list of supported OS versions from a TXT record at versions.pi-hole.net
|
local os_to_check="${1}"
|
||||||
# and determines whether or not the script is running on one of those systems
|
# Strip just the base name of the system using sed
|
||||||
local remote_os_domain valid_os valid_version detected_os detected_version cmdResult digReturnCode response
|
# shellcheck disable=SC2001
|
||||||
remote_os_domain="versions.pi-hole.net"
|
the_os=$(echo "${os_to_check}" | sed 's/ .*//')
|
||||||
|
# If the variable is one of our supported OSes,
|
||||||
|
case "${the_os}" in
|
||||||
|
# Print it in green
|
||||||
|
"Raspbian") log_write "${TICK} ${COL_GREEN}${os_to_check}${COL_NC}";;
|
||||||
|
"Ubuntu") log_write "${TICK} ${COL_GREEN}${os_to_check}${COL_NC}";;
|
||||||
|
"Fedora") log_write "${TICK} ${COL_GREEN}${os_to_check}${COL_NC}";;
|
||||||
|
"Debian") log_write "${TICK} ${COL_GREEN}${os_to_check}${COL_NC}";;
|
||||||
|
"CentOS") log_write "${TICK} ${COL_GREEN}${os_to_check}${COL_NC}";;
|
||||||
|
# If not, show it in red and link to our software requirements page
|
||||||
|
*) log_write "${CROSS} ${COL_RED}${os_to_check}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS})";
|
||||||
|
esac
|
||||||
|
}
|
||||||
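The new `is_os_supported` function above first strips everything after the first space to get the distro base name, then branches on it with a `case` statement. A condensed sketch of those two steps (the `pretty_name` value is illustrative):

```shell
#!/usr/bin/env bash
# Strip the base name of the system using sed, then classify it,
# mirroring the is_os_supported pattern in the diff above.
pretty_name="Raspbian GNU/Linux 10 (buster)"
# shellcheck disable=SC2001
the_os=$(echo "${pretty_name}" | sed 's/ .*//')
case "${the_os}" in
    "Raspbian"|"Ubuntu"|"Fedora"|"Debian"|"CentOS")
        echo "supported: ${pretty_name}";;
    *)
        echo "unsupported: ${pretty_name}";;
esac
```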
|
|
||||||
detected_os=$(grep "\bID\b" /etc/os-release | cut -d '=' -f2 | tr -d '"')
|
get_distro_attributes() {
|
||||||
detected_version=$(grep VERSION_ID /etc/os-release | cut -d '=' -f2 | tr -d '"')
|
# Put the current Internal Field Separator into another variable so it can be restored later
|
||||||
|
OLD_IFS="$IFS"
|
||||||
|
# Store the distro info in an array and make it global since the OS won't change,
|
||||||
|
# but we'll keep it within the function for better unit testing
|
||||||
|
local distro_info
|
||||||
|
#shellcheck disable=SC2016
|
||||||
|
IFS=$'\r\n' command eval 'distro_info=( $(cat /etc/*release) )'
|
||||||
|
|
||||||
cmdResult="$(dig +short -t txt ${remote_os_domain} @ns1.pi-hole.net 2>&1; echo $?)"
|
# Set a named variable for better readability
|
||||||
#Get the return code of the previous command (last line)
|
local distro_attribute
|
||||||
digReturnCode="${cmdResult##*$'\n'}"
|
# For each line found in an /etc/*release file,
|
||||||
|
for distro_attribute in "${distro_info[@]}"; do
|
||||||
# Extract dig response
|
# store the key in a variable
|
||||||
response="${cmdResult%%$'\n'*}"
|
local pretty_name_key
|
||||||
|
pretty_name_key=$(echo "${distro_attribute}" | grep "PRETTY_NAME" | cut -d '=' -f1)
|
||||||
IFS=" " read -r -a supportedOS < <(echo "${response}" | tr -d '"')
|
# we need just the OS PRETTY_NAME,
|
||||||
for distro_and_versions in "${supportedOS[@]}"
|
if [[ "${pretty_name_key}" == "PRETTY_NAME" ]]; then
|
||||||
do
|
# so save it in a variable when we find it
|
||||||
distro_part="${distro_and_versions%%=*}"
|
PRETTY_NAME_VALUE=$(echo "${distro_attribute}" | grep "PRETTY_NAME" | cut -d '=' -f2- | tr -d '"')
|
||||||
versions_part="${distro_and_versions##*=}"
|
# then pass it as an argument that checks if the OS is supported
|
||||||
|
is_os_supported "${PRETTY_NAME_VALUE}"
|
||||||
if [[ "${detected_os^^}" =~ ${distro_part^^} ]]; then
|
else
|
||||||
valid_os=true
|
# Since we only need the pretty name, we can just skip over anything that is not a match
|
||||||
IFS="," read -r -a supportedVer <<<"${versions_part}"
|
:
|
||||||
for version in "${supportedVer[@]}"
|
|
||||||
do
|
|
||||||
if [[ "${detected_version}" =~ $version ]]; then
|
|
||||||
valid_version=true
|
|
||||||
break
|
|
||||||
fi
|
fi
|
||||||
done
|
done
|
||||||
break
|
# Set the IFS back to what it was
|
||||||
fi
|
IFS="$OLD_IFS"
|
||||||
done
|
|
||||||
|
|
||||||
log_write "${INFO} dig return code: ${digReturnCode}"
|
|
||||||
log_write "${INFO} dig response: ${response}"
|
|
||||||
|
|
||||||
if [ "$valid_os" = true ]; then
|
|
||||||
log_write "${TICK} Distro: ${COL_GREEN}${detected_os^}${COL_NC}"
|
|
||||||
|
|
||||||
if [ "$valid_version" = true ]; then
|
|
||||||
log_write "${TICK} Version: ${COL_GREEN}${detected_version}${COL_NC}"
|
|
||||||
else
|
|
||||||
log_write "${CROSS} Version: ${COL_RED}${detected_version}${COL_NC}"
|
|
||||||
log_write "${CROSS} Error: ${COL_RED}${detected_os^} is supported but version ${detected_version} is currently unsupported (${FAQ_HARDWARE_REQUIREMENTS})${COL_NC}"
|
|
||||||
fi
|
|
||||||
else
|
|
||||||
log_write "${CROSS} Distro: ${COL_RED}${detected_os^}${COL_NC}"
|
|
||||||
log_write "${CROSS} Error: ${COL_RED}${detected_os^} is not a supported distro (${FAQ_HARDWARE_REQUIREMENTS})${COL_NC}"
|
|
||||||
fi
|
|
||||||
}
|
}
|
||||||
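`get_distro_attributes` above saves `IFS`, splits `/etc/*release` on line boundaries, pulls out the `PRETTY_NAME` value, and restores `IFS`. A self-contained sketch of that extraction, fed from an inline string so it runs anywhere (the release data is illustrative):

```shell
#!/usr/bin/env bash
# Parse PRETTY_NAME out of os-release-style key=value lines, using the
# same save/restore-IFS pattern as get_distro_attributes above.
release_data='NAME="Debian GNU/Linux"
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
VERSION_ID="10"'
OLD_IFS="$IFS"
IFS=$'\n'
pretty=""
for line in ${release_data}; do
    # The key is everything before the first '='
    key="${line%%=*}"
    if [[ "${key}" == "PRETTY_NAME" ]]; then
        # Take everything after '=' and strip the surrounding quotes
        pretty=$(echo "${line}" | cut -d '=' -f2- | tr -d '"')
    fi
done
# Set the IFS back to what it was
IFS="$OLD_IFS"
echo "${pretty}"
```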
|
|
||||||
diagnose_operating_system() {
|
diagnose_operating_system() {
|
||||||
@@ -456,7 +426,7 @@ diagnose_operating_system() {
|
|||||||
# If there is a /etc/*release file, it's probably a supported operating system, so we can
|
# If there is a /etc/*release file, it's probably a supported operating system, so we can
|
||||||
if ls /etc/*release 1> /dev/null 2>&1; then
|
if ls /etc/*release 1> /dev/null 2>&1; then
|
||||||
# display the attributes to the user from the function made earlier
|
# display the attributes to the user from the function made earlier
|
||||||
os_check
|
get_distro_attributes
|
||||||
else
|
else
|
||||||
# If it doesn't exist, it's not a system we currently support and link to FAQ
|
# If it doesn't exist, it's not a system we currently support and link to FAQ
|
||||||
log_write "${CROSS} ${COL_RED}${error_msg}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS})"
|
log_write "${CROSS} ${COL_RED}${error_msg}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS})"
|
||||||
@@ -673,21 +643,19 @@ ping_internet() {
|
|||||||
}
|
}
|
||||||
|
|
||||||
compare_port_to_service_assigned() {
|
compare_port_to_service_assigned() {
|
||||||
local service_name
|
local service_name="${1}"
|
||||||
local expected_service
|
# The programs we use may change at some point, so they are in a variable here
|
||||||
local port
|
local resolver="pihole-FTL"
|
||||||
|
local web_server="lighttpd"
|
||||||
service_name="${2}"
|
local ftl="pihole-FTL"
|
||||||
expected_service="${1}"
|
|
||||||
port="${3}"
|
|
||||||
|
|
||||||
# If the service is a Pi-hole service, highlight it in green
|
# If the service is a Pi-hole service, highlight it in green
|
||||||
if [[ "${service_name}" == "${expected_service}" ]]; then
|
if [[ "${service_name}" == "${resolver}" ]] || [[ "${service_name}" == "${web_server}" ]] || [[ "${service_name}" == "${ftl}" ]]; then
|
||||||
log_write "[${COL_GREEN}${port}${COL_NC}] is in use by ${COL_GREEN}${service_name}${COL_NC}"
|
log_write "[${COL_GREEN}${port_number}${COL_NC}] is in use by ${COL_GREEN}${service_name}${COL_NC}"
|
||||||
# Otherwise,
|
# Otherwise,
|
||||||
else
|
else
|
||||||
# Show the service name in red since it's non-standard
|
# Show the service name in red since it's non-standard
|
||||||
log_write "[${COL_RED}${port}${COL_NC}] is in use by ${COL_RED}${service_name}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_PORTS})"
|
log_write "[${COL_RED}${port_number}${COL_NC}] is in use by ${COL_RED}${service_name}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_PORTS})"
|
||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
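The three-argument form of `compare_port_to_service_assigned` on the left-hand side above passes the expected service, the actual service, and the port explicitly, so the function has no hidden dependency on the caller's variables. A runnable sketch of that shape (log output simplified to plain `echo`):

```shell
#!/usr/bin/env bash
# Compare the service actually bound to a port against the expected
# one; arguments mirror the left-hand side of the diff above.
compare_port_to_service_assigned() {
    local expected_service="${1}"
    local service_name="${2}"
    local port="${3}"
    if [[ "${service_name}" == "${expected_service}" ]]; then
        echo "[${port}] OK: ${service_name}"
    else
        echo "[${port}] WARN: ${service_name}"
    fi
}
compare_port_to_service_assigned "pihole-FTL" "pihole-FTL" 53
compare_port_to_service_assigned "lighttpd" "nginx" 80
```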
|
|
||||||
@@ -721,11 +689,11 @@ check_required_ports() {
|
|||||||
fi
|
fi
|
||||||
# Use a case statement to determine if the right services are using the right ports
|
# Use a case statement to determine if the right services are using the right ports
|
||||||
case "$(echo "$port_number" | rev | cut -d: -f1 | rev)" in
|
case "$(echo "$port_number" | rev | cut -d: -f1 | rev)" in
|
||||||
53) compare_port_to_service_assigned "${resolver}" "${service_name}" 53
|
53) compare_port_to_service_assigned "${resolver}"
|
||||||
;;
|
;;
|
||||||
80) compare_port_to_service_assigned "${web_server}" "${service_name}" 80
|
80) compare_port_to_service_assigned "${web_server}"
|
||||||
;;
|
;;
|
||||||
4711) compare_port_to_service_assigned "${ftl}" "${service_name}" 4711
|
4711) compare_port_to_service_assigned "${ftl}"
|
||||||
;;
|
;;
|
||||||
# If it's not a default port that Pi-hole needs, just print it out for the user to see
|
# If it's not a default port that Pi-hole needs, just print it out for the user to see
|
||||||
*) log_write "${port_number} ${service_name} (${protocol_type})";
|
*) log_write "${port_number} ${service_name} (${protocol_type})";
|
||||||
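The `rev | cut -d: -f1 | rev` pipeline in the `case` expression above extracts the field after the *last* colon, which a plain `cut -d: -f2` cannot do when the listener address itself contains colons (e.g. an IPv6 `[::]:53`). A quick demonstration:

```shell
#!/usr/bin/env bash
# Reverse the string, take the first colon-delimited field, reverse
# again: this yields the text after the LAST colon.
port_number="[::]:53"
last_field=$(echo "${port_number}" | rev | cut -d: -f1 | rev)
echo "${last_field}"
```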
@@ -758,7 +726,7 @@ check_x_headers() {
|
|||||||
# Do it for the dashboard as well, as the header is different than above
|
# Do it for the dashboard as well, as the header is different than above
|
||||||
local dashboard
|
local dashboard
|
||||||
dashboard=$(curl -Is localhost/admin/ | awk '/X-Pi-hole/' | tr -d '\r')
|
dashboard=$(curl -Is localhost/admin/ | awk '/X-Pi-hole/' | tr -d '\r')
|
||||||
# Store what the X-Header should be in variables for comparison later
|
# Store what the X-Header should be in variables for comparison later
|
||||||
local block_page_working
|
local block_page_working
|
||||||
block_page_working="X-Pi-hole: A black hole for Internet advertisements."
|
block_page_working="X-Pi-hole: A black hole for Internet advertisements."
|
||||||
local dashboard_working
|
local dashboard_working
|
||||||
@@ -825,11 +793,11 @@ dig_at() {
|
|||||||
# This helps emulate queries to different domains that a user might query
|
# This helps emulate queries to different domains that a user might query
|
||||||
# It will also give extra assurance that Pi-hole is correctly resolving and blocking domains
|
# It will also give extra assurance that Pi-hole is correctly resolving and blocking domains
|
||||||
local random_url
|
local random_url
|
||||||
random_url=$(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity ORDER BY RANDOM() LIMIT 1")
|
random_url=$(shuf -n 1 "${PIHOLE_BLOCKLIST_FILE}")
|
||||||
|
|
||||||
# First, do a dig on localhost to see if Pi-hole can use itself to block a domain
|
# First, do a dig on localhost to see if Pi-hole can use itself to block a domain
|
||||||
if local_dig=$(dig +tries=1 +time=2 -"${protocol}" "${random_url}" @${local_address} +short "${record_type}"); then
|
if local_dig=$(dig +tries=1 +time=2 -"${protocol}" "${random_url}" @${local_address} +short "${record_type}"); then
|
||||||
# If it can, show success
|
# If it can, show success
|
||||||
log_write "${TICK} ${random_url} ${COL_GREEN}is ${local_dig}${COL_NC} via ${COL_CYAN}localhost$COL_NC (${local_address})"
|
log_write "${TICK} ${random_url} ${COL_GREEN}is ${local_dig}${COL_NC} via ${COL_CYAN}localhost$COL_NC (${local_address})"
|
||||||
else
|
else
|
||||||
# Otherwise, show a failure
|
# Otherwise, show a failure
|
||||||
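The right-hand side above replaces the sqlite3 `ORDER BY RANDOM() LIMIT 1` query with `shuf -n 1` on the flat blocklist file; both pick one random entry. Illustrated against a throwaway file (the path and domains are invented for the example):

```shell
#!/usr/bin/env bash
# Pick one random line from a flat blocklist with shuf, as the new
# dig_at code does; the temp file stands in for the real blocklist.
blocklist=$(mktemp)
printf 'ads.example.com\ntracker.example.net\nbad.example.org\n' > "${blocklist}"
random_url=$(shuf -n 1 "${blocklist}")
echo "picked: ${random_url}"
rm -f "${blocklist}"
```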
@@ -980,7 +948,7 @@ check_name_resolution() {
|
|||||||
# This function can check a directory exists
|
# This function can check a directory exists
|
||||||
# Pi-hole has files in several places, so we will reuse this function
|
# Pi-hole has files in several places, so we will reuse this function
|
||||||
dir_check() {
|
dir_check() {
|
||||||
# Set the first argument passed to this function as a named variable for better readability
|
# Set the first argument passed to this function as a named variable for better readability
|
||||||
local directory="${1}"
|
local directory="${1}"
|
||||||
# Display the current test that is running
|
# Display the current test that is running
|
||||||
echo_current_diagnostic "contents of ${COL_CYAN}${directory}${COL_NC}"
|
echo_current_diagnostic "contents of ${COL_CYAN}${directory}${COL_NC}"
|
||||||
@@ -998,16 +966,17 @@ dir_check() {
|
|||||||
}
|
}
|
||||||
|
|
||||||
list_files_in_dir() {
|
list_files_in_dir() {
|
||||||
# Set the first argument passed to this function as a named variable for better readability
|
# Set the first argument passed to this function as a named variable for better readability
|
||||||
local dir_to_parse="${1}"
|
local dir_to_parse="${1}"
|
||||||
# Store the files found in an array
|
# Store the files found in an array
|
||||||
mapfile -t files_found < <(ls "${dir_to_parse}")
|
mapfile -t files_found < <(ls "${dir_to_parse}")
|
||||||
# For each file in the array,
|
# For each file in the array,
|
||||||
for each_file in "${files_found[@]}"; do
|
for each_file in "${files_found[@]}"; do
|
||||||
if [[ -d "${dir_to_parse}/${each_file}" ]]; then
|
if [[ -d "${dir_to_parse}/${each_file}" ]]; then
|
||||||
# If it's a directory, do nothing
|
# If it's a directory, do nothing
|
||||||
:
|
:
|
||||||
elif [[ "${dir_to_parse}/${each_file}" == "${PIHOLE_DEBUG_LOG}" ]] || \
|
elif [[ "${dir_to_parse}/${each_file}" == "${PIHOLE_BLOCKLIST_FILE}" ]] || \
|
||||||
|
[[ "${dir_to_parse}/${each_file}" == "${PIHOLE_DEBUG_LOG}" ]] || \
|
||||||
[[ "${dir_to_parse}/${each_file}" == "${PIHOLE_RAW_BLOCKLIST_FILES}" ]] || \
|
[[ "${dir_to_parse}/${each_file}" == "${PIHOLE_RAW_BLOCKLIST_FILES}" ]] || \
|
||||||
[[ "${dir_to_parse}/${each_file}" == "${PIHOLE_INSTALL_LOG_FILE}" ]] || \
|
[[ "${dir_to_parse}/${each_file}" == "${PIHOLE_INSTALL_LOG_FILE}" ]] || \
|
||||||
[[ "${dir_to_parse}/${each_file}" == "${PIHOLE_SETUP_VARS_FILE}" ]] || \
|
[[ "${dir_to_parse}/${each_file}" == "${PIHOLE_SETUP_VARS_FILE}" ]] || \
|
||||||
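`list_files_in_dir` above reads a directory listing into an array with `mapfile`, then skips subdirectories and excluded files. A self-contained sketch of that pattern on a throwaway directory (paths are illustrative; `continue` stands in for the no-op `:`):

```shell
#!/usr/bin/env bash
# Read a directory listing into an array and skip subdirectories,
# mirroring the mapfile loop in list_files_in_dir above.
dir_to_parse=$(mktemp -d)
touch "${dir_to_parse}/a.conf" "${dir_to_parse}/b.conf"
mkdir "${dir_to_parse}/subdir"
mapfile -t files_found < <(ls "${dir_to_parse}")
for each_file in "${files_found[@]}"; do
    if [[ -d "${dir_to_parse}/${each_file}" ]]; then
        # Directories are skipped, just like in the function above
        continue
    fi
    echo "file: ${each_file}"
done
rm -rf "${dir_to_parse}"
```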
@@ -1092,71 +1061,31 @@ head_tail_log() {
|
|||||||
IFS="$OLD_IFS"
|
IFS="$OLD_IFS"
|
||||||
}
|
}
|
||||||
|
|
||||||
show_db_entries() {
|
|
||||||
local title="${1}"
|
|
||||||
local query="${2}"
|
|
||||||
local widths="${3}"
|
|
||||||
|
|
||||||
echo_current_diagnostic "${title}"
|
|
||||||
|
|
||||||
OLD_IFS="$IFS"
|
|
||||||
IFS=$'\r\n'
|
|
||||||
local entries=()
|
|
||||||
mapfile -t entries < <(\
|
|
||||||
sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" \
|
|
||||||
-cmd ".headers on" \
|
|
||||||
-cmd ".mode column" \
|
|
||||||
-cmd ".width ${widths}" \
|
|
||||||
"${query}"\
|
|
||||||
)
|
|
||||||
|
|
||||||
for line in "${entries[@]}"; do
|
|
||||||
log_write " ${line}"
|
|
||||||
done
|
|
||||||
|
|
||||||
IFS="$OLD_IFS"
|
|
||||||
}
|
|
||||||
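The removed `show_db_entries` helper above drives sqlite3 with `-cmd` dot-commands (`.headers`, `.mode column`) so query results print as aligned, headed columns. A self-contained sketch against an in-memory database (the table and rows are invented; real Pi-hole queries run against `gravity.db`):

```shell
#!/usr/bin/env bash
# Run a query with column headers enabled, capturing each output line
# into an array the way show_db_entries does. Requires the sqlite3 CLI.
mapfile -t entries < <(\
    sqlite3 -cmd ".headers on" \
            -cmd ".mode column" \
            -cmd "CREATE TABLE g(id INTEGER, name TEXT);" \
            -cmd "INSERT INTO g VALUES (1,'Default');" \
            ":memory:" \
            "SELECT id,name FROM g;" \
)
for line in "${entries[@]}"; do
    echo "  ${line}"
done
```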
|
|
||||||
show_groups() {
|
|
||||||
show_db_entries "Groups" "SELECT id,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,name,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,description FROM \"group\"" "4 7 50 19 19 50"
|
|
||||||
}
|
|
||||||
|
|
||||||
show_adlists() {
|
|
||||||
show_db_entries "Adlists" "SELECT id,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,GROUP_CONCAT(adlist_by_group.group_id) group_ids,address,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM adlist LEFT JOIN adlist_by_group ON adlist.id = adlist_by_group.adlist_id GROUP BY id;" "4 7 12 100 19 19 50"
|
|
||||||
}
|
|
||||||
|
|
||||||
show_domainlist() {
|
|
||||||
show_db_entries "Domainlist (0/1 = exact white-/blacklist, 2/3 = regex white-/blacklist)" "SELECT id,CASE type WHEN '0' THEN '0 ' WHEN '1' THEN ' 1 ' WHEN '2' THEN ' 2 ' WHEN '3' THEN ' 3' ELSE type END type,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,GROUP_CONCAT(domainlist_by_group.group_id) group_ids,domain,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM domainlist LEFT JOIN domainlist_by_group ON domainlist.id = domainlist_by_group.domainlist_id GROUP BY id;" "4 4 7 12 100 19 19 50"
|
|
||||||
}
|
|
||||||
|
|
||||||
show_clients() {
|
|
||||||
show_db_entries "Clients" "SELECT id,GROUP_CONCAT(client_by_group.group_id) group_ids,ip,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM client LEFT JOIN client_by_group ON client.id = client_by_group.client_id GROUP BY id;" "4 12 100 19 19 50"
|
|
||||||
}
|
|
||||||
|
|
||||||
analyze_gravity_list() {
|
analyze_gravity_list() {
|
||||||
echo_current_diagnostic "Gravity List and Database"
|
echo_current_diagnostic "Gravity list"
|
||||||
|
local head_line
|
||||||
local gravity_permissions
|
local tail_line
|
||||||
gravity_permissions=$(ls -ld "${PIHOLE_GRAVITY_DB_FILE}")
|
# Put the current Internal Field Separator into another variable so it can be restored later
|
||||||
log_write "${COL_GREEN}${gravity_permissions}${COL_NC}"
|
|
||||||
|
|
||||||
show_db_entries "Info table" "SELECT property,value FROM info" "20 40"
|
|
||||||
gravity_updated_raw="$(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT value FROM info where property = 'updated'")"
|
|
||||||
gravity_updated="$(date -d @"${gravity_updated_raw}")"
|
|
||||||
log_write " Last gravity run finished at: ${COL_CYAN}${gravity_updated}${COL_NC}"
|
|
||||||
log_write ""
|
|
||||||
|
|
||||||
OLD_IFS="$IFS"
|
OLD_IFS="$IFS"
|
||||||
|
# Get the lines that are in the file(s) and store them in an array for parsing later
|
||||||
IFS=$'\r\n'
|
IFS=$'\r\n'
|
||||||
local gravity_sample=()
|
local gravity_permissions
|
||||||
mapfile -t gravity_sample < <(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity LIMIT 10")
|
gravity_permissions=$(ls -ld "${PIHOLE_BLOCKLIST_FILE}")
|
||||||
log_write " ${COL_CYAN}----- First 10 Gravity Domains -----${COL_NC}"
|
log_write "${COL_GREEN}${gravity_permissions}${COL_NC}"
|
||||||
|
local gravity_head=()
|
||||||
for line in "${gravity_sample[@]}"; do
|
mapfile -t gravity_head < <(head -n 4 ${PIHOLE_BLOCKLIST_FILE})
|
||||||
log_write " ${line}"
|
log_write " ${COL_CYAN}-----head of $(basename ${PIHOLE_BLOCKLIST_FILE})------${COL_NC}"
|
||||||
|
for head_line in "${gravity_head[@]}"; do
|
||||||
|
log_write " ${head_line}"
|
||||||
done
|
done
|
||||||
|
|
||||||
log_write ""
|
log_write ""
|
||||||
|
local gravity_tail=()
|
||||||
|
mapfile -t gravity_tail < <(tail -n 4 ${PIHOLE_BLOCKLIST_FILE})
|
||||||
|
log_write " ${COL_CYAN}-----tail of $(basename ${PIHOLE_BLOCKLIST_FILE})------${COL_NC}"
|
||||||
|
for tail_line in "${gravity_tail[@]}"; do
|
||||||
|
log_write " ${tail_line}"
|
||||||
|
done
|
||||||
|
# Set the IFS back to what it was
|
||||||
IFS="$OLD_IFS"
|
IFS="$OLD_IFS"
|
||||||
}
|
}
|
||||||
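The left-hand side of `analyze_gravity_list` above converts the gravity `updated` epoch value into a human-readable timestamp with `date -d @<epoch>` (a GNU date feature). A self-contained check, with `-u` and an explicit format added here so the output is deterministic:

```shell
#!/usr/bin/env bash
# Convert a Unix epoch value to a readable UTC timestamp, as the
# "Last gravity run finished at" line does (GNU date syntax).
gravity_updated_raw=0
gravity_updated="$(date -u -d @"${gravity_updated_raw}" '+%Y-%m-%d %H:%M:%S')"
echo "Last gravity run finished at: ${gravity_updated}"
```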
|
|
||||||
@@ -1198,7 +1127,7 @@ analyze_pihole_log() {
|
|||||||
# So first check if there are domains in the log that should be obfuscated
|
# So first check if there are domains in the log that should be obfuscated
|
||||||
if [[ -n ${line_to_obfuscate} ]]; then
|
if [[ -n ${line_to_obfuscate} ]]; then
|
||||||
# If there are, we need to use awk to replace only the domain name (the 6th field in the log)
|
# If there are, we need to use awk to replace only the domain name (the 6th field in the log)
|
||||||
# so we substitute the domain for the placeholder value
|
# so we substitute the domain for the placeholder value
|
||||||
obfuscated_line=$(echo "${line_to_obfuscate}" | awk -v placeholder="${OBFUSCATED_PLACEHOLDER}" '{sub($6,placeholder); print $0}')
|
obfuscated_line=$(echo "${line_to_obfuscate}" | awk -v placeholder="${OBFUSCATED_PLACEHOLDER}" '{sub($6,placeholder); print $0}')
|
||||||
log_write " ${obfuscated_line}"
|
log_write " ${obfuscated_line}"
|
||||||
else
|
else
|
||||||
@@ -1220,11 +1149,6 @@ tricorder_use_nc_or_curl() {
|
|||||||
log_write " * Using ${COL_GREEN}curl${COL_NC} for transmission."
|
log_write " * Using ${COL_GREEN}curl${COL_NC} for transmission."
|
||||||
# transmit the log via TLS and store the token returned in a variable
|
# transmit the log via TLS and store the token returned in a variable
|
||||||
tricorder_token=$(curl --silent --upload-file ${PIHOLE_DEBUG_LOG} https://tricorder.pi-hole.net:${TRICORDER_SSL_PORT_NUMBER})
|
tricorder_token=$(curl --silent --upload-file ${PIHOLE_DEBUG_LOG} https://tricorder.pi-hole.net:${TRICORDER_SSL_PORT_NUMBER})
|
||||||
if [ -z "${tricorder_token}" ]; then
|
|
||||||
# curl failed, fallback to nc
|
|
||||||
log_write " * ${COL_GREEN}curl${COL_NC} failed, falling back to ${COL_YELLOW}netcat${COL_NC} for transmission."
|
|
||||||
tricorder_token=$(< ${PIHOLE_DEBUG_LOG} nc tricorder.pi-hole.net ${TRICORDER_NC_PORT_NUMBER})
|
|
||||||
fi
|
|
||||||
# Otherwise,
|
# Otherwise,
|
||||||
else
|
else
|
||||||
# use net cat
|
# use net cat
|
||||||
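The curl-to-netcat fallback removed on the right-hand side above hinges on an empty-string test: if curl returns no token, retry over netcat. The shape of that check, with stubbed uploader functions standing in for the real transports:

```shell
#!/usr/bin/env bash
# Fall back to a second transport when the first returns an empty
# token; the two functions are stand-ins, not Pi-hole's real calls.
upload_with_curl() { echo ""; }        # stand-in: curl produced nothing
upload_with_nc()   { echo "abc123"; } # stand-in: netcat fallback
tricorder_token=$(upload_with_curl)
if [ -z "${tricorder_token}" ]; then
    # curl failed, fall back to the second transport
    tricorder_token=$(upload_with_nc)
fi
echo "token: ${tricorder_token}"
```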
@@ -1251,7 +1175,7 @@ upload_to_tricorder() {
|
|||||||
log_write " * The debug log can be uploaded to tricorder.pi-hole.net for sharing with developers only."
|
log_write " * The debug log can be uploaded to tricorder.pi-hole.net for sharing with developers only."
|
||||||
log_write " * For more information, see: ${TRICORDER_CONTEST}"
|
log_write " * For more information, see: ${TRICORDER_CONTEST}"
|
||||||
log_write " * If available, we'll use openssl to upload the log, otherwise it will fall back to netcat."
|
log_write " * If available, we'll use openssl to upload the log, otherwise it will fall back to netcat."
|
||||||
# If pihole -d is running automatically (usually through the dashboard)
|
# If pihole -d is running automatically (usually through the dashboard)
|
||||||
if [[ "${AUTOMATED}" ]]; then
|
if [[ "${AUTOMATED}" ]]; then
|
||||||
# let the user know
|
# let the user know
|
||||||
log_write "${INFO} Debug script running in automated mode"
|
log_write "${INFO} Debug script running in automated mode"
|
||||||
@@ -1267,7 +1191,7 @@ upload_to_tricorder() {
|
|||||||
# If they say yes, run our function for uploading the log
|
# If they say yes, run our function for uploading the log
|
||||||
[yY][eE][sS]|[yY]) tricorder_use_nc_or_curl;;
|
[yY][eE][sS]|[yY]) tricorder_use_nc_or_curl;;
|
||||||
# If they choose no, just exit out of the script
|
# If they choose no, just exit out of the script
|
||||||
*) log_write " * Log will ${COL_GREEN}NOT${COL_NC} be uploaded to tricorder.\\n * A local copy of the debug log can be found at: ${COL_CYAN}${PIHOLE_DEBUG_LOG}${COL_NC}\\n";exit;
|
*) log_write " * Log will ${COL_GREEN}NOT${COL_NC} be uploaded to tricorder.";exit;
|
||||||
esac
|
esac
|
||||||
fi
|
fi
|
||||||
# Check if tricorder.pi-hole.net is reachable and provide token
|
# Check if tricorder.pi-hole.net is reachable and provide token
|
||||||
@@ -1312,10 +1236,6 @@ process_status
|
|||||||
parse_setup_vars
|
parse_setup_vars
|
||||||
check_x_headers
|
check_x_headers
|
||||||
analyze_gravity_list
|
analyze_gravity_list
|
||||||
show_groups
|
|
||||||
show_domainlist
|
|
||||||
show_clients
|
|
||||||
show_adlists
|
|
||||||
show_content_of_pihole_files
|
show_content_of_pihole_files
|
||||||
parse_locale
|
parse_locale
|
||||||
analyze_pihole_log
|
analyze_pihole_log
|
||||||
|
@@ -26,7 +26,7 @@ if [ -z "$DBFILE" ]; then
|
|||||||
fi
|
fi
|
||||||
|
|
||||||
if [[ "$@" != *"quiet"* ]]; then
|
if [[ "$@" != *"quiet"* ]]; then
|
||||||
echo -ne " ${INFO} Flushing /var/log/pihole/pihole.log ..."
|
echo -ne " ${INFO} Flushing /var/log/pihole.log ..."
|
||||||
fi
|
fi
|
||||||
if [[ "$@" == *"once"* ]]; then
|
if [[ "$@" == *"once"* ]]; then
|
||||||
# Nightly logrotation
|
# Nightly logrotation
|
||||||
@@ -39,9 +39,8 @@ if [[ "$@" == *"once"* ]]; then
|
|||||||
# Note that moving the file is not an option, as
|
# Note that moving the file is not an option, as
|
||||||
# dnsmasq would happily continue writing into the
|
# dnsmasq would happily continue writing into the
|
||||||
# moved file (it will have the same file handler)
|
# moved file (it will have the same file handler)
|
||||||
cp -p /var/log/pihole/pihole.log /var/log/pihole/pihole.log.1
|
cp /var/log/pihole.log /var/log/pihole.log.1
|
||||||
echo " " > /var/log/pihole/pihole.log
|
echo " " > /var/log/pihole.log
|
||||||
chmod 644 /var/log/pihole/pihole.log
|
|
||||||
fi
|
fi
|
||||||
else
|
else
|
||||||
# Manual flushing
|
# Manual flushing
|
||||||
@@ -51,10 +50,9 @@ else
|
|||||||
/usr/sbin/logrotate --force /etc/pihole/logrotate
|
/usr/sbin/logrotate --force /etc/pihole/logrotate
|
||||||
else
|
else
|
||||||
# Flush both pihole.log and pihole.log.1 (if existing)
|
# Flush both pihole.log and pihole.log.1 (if existing)
|
||||||
echo " " > /var/log/pihole/pihole.log
|
echo " " > /var/log/pihole.log
|
||||||
if [ -f /var/log/pihole/pihole.log.1 ]; then
|
if [ -f /var/log/pihole.log.1 ]; then
|
||||||
echo " " > /var/log/pihole/pihole.log.1
|
echo " " > /var/log/pihole.log.1
|
||||||
chmod 644 /var/log/pihole/pihole.log.1
|
|
||||||
fi
|
fi
|
||||||
fi
|
fi
|
||||||
# Delete most recent 24 hours from FTL's database, leave even older data intact (don't wipe out all history)
|
# Delete most recent 24 hours from FTL's database, leave even older data intact (don't wipe out all history)
|
||||||
@@ -65,6 +63,6 @@ else
|
|||||||
fi
|
fi
|
||||||
|
|
||||||
if [[ "$@" != *"quiet"* ]]; then
|
if [[ "$@" != *"quiet"* ]]; then
|
||||||
echo -e "${OVER} ${TICK} Flushed /var/log/pihole/pihole.log"
|
echo -e "${OVER} ${TICK} Flushed /var/log/pihole.log"
|
||||||
echo -e " ${TICK} Deleted ${deleted} queries from database"
|
echo -e " ${TICK} Deleted ${deleted} queries from database"
|
||||||
fi
|
fi
|
||||||
|
229
advanced/Scripts/query.sh
Executable file → Normal file
@@ -11,8 +11,10 @@
|
|||||||
|
|
||||||
# Globals
|
# Globals
|
||||||
piholeDir="/etc/pihole"
|
piholeDir="/etc/pihole"
|
||||||
gravityDBfile="${piholeDir}/gravity.db"
|
adListsList="$piholeDir/adlists.list"
|
||||||
|
wildcardlist="/etc/dnsmasq.d/03-pihole-wildcard.conf"
|
||||||
options="$*"
|
options="$*"
|
||||||
|
adlist=""
|
||||||
all=""
|
all=""
|
||||||
exact=""
|
exact=""
|
||||||
blockpage=""
|
blockpage=""
|
||||||
@@ -21,30 +23,40 @@ matchType="match"
|
|||||||
colfile="/opt/pihole/COL_TABLE"
|
colfile="/opt/pihole/COL_TABLE"
|
||||||
source "${colfile}"
|
source "${colfile}"
|
||||||
|
|
||||||
|
# Print each subdomain
|
||||||
|
# e.g: foo.bar.baz.com = "foo.bar.baz.com bar.baz.com baz.com com"
|
||||||
|
processWildcards() {
|
||||||
|
IFS="." read -r -a array <<< "${1}"
|
||||||
|
for (( i=${#array[@]}-1; i>=0; i-- )); do
|
||||||
|
ar=""
|
||||||
|
for (( j=${#array[@]}-1; j>${#array[@]}-i-2; j-- )); do
|
||||||
|
if [[ $j == $((${#array[@]}-1)) ]]; then
|
||||||
|
ar="${array[$j]}"
|
||||||
|
else
|
||||||
|
ar="${array[$j]}.${ar}"
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
echo "${ar}"
|
||||||
|
done
|
||||||
|
}
|
||||||
|
|
||||||
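The `processWildcards` function added above expands a domain into each of its parent domains, as its comment's example shows. Reproduced verbatim and run on that example:

```shell
#!/usr/bin/env bash
# Print each subdomain of the input, longest first:
# foo.bar.baz.com -> foo.bar.baz.com bar.baz.com baz.com com
processWildcards() {
    IFS="." read -r -a array <<< "${1}"
    for (( i=${#array[@]}-1; i>=0; i-- )); do
        ar=""
        for (( j=${#array[@]}-1; j>${#array[@]}-i-2; j-- )); do
            if [[ $j == $((${#array[@]}-1)) ]]; then
                ar="${array[$j]}"
            else
                ar="${array[$j]}.${ar}"
            fi
        done
        echo "${ar}"
    done
}
processWildcards "foo.bar.baz.com"
```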
# Scan an array of files for matching strings
|
# Scan an array of files for matching strings
|
||||||
scanList(){
|
scanList(){
|
||||||
# Escape full stops
|
# Escape full stops
|
||||||
local domain="${1}" esc_domain="${1//./\\.}" lists="${2}" type="${3:-}"
|
local domain="${1//./\\.}" lists="${2}" type="${3:-}"
|
||||||
|
|
||||||
# Prevent grep from printing file path
|
# Prevent grep from printing file path
|
||||||
cd "$piholeDir" || exit 1
|
cd "$piholeDir" || exit 1
|
||||||
|
|
||||||
# Prevent grep -i matching slowly: https://bit.ly/2xFXtUX
|
# Prevent grep -i matching slowly: http://bit.ly/2xFXtUX
|
||||||
export LC_CTYPE=C
|
export LC_CTYPE=C
|
||||||
|
|
||||||
# /dev/null forces filename to be printed when only one list has been generated
|
# /dev/null forces filename to be printed when only one list has been generated
|
||||||
|
# shellcheck disable=SC2086
|
||||||
case "${type}" in
|
case "${type}" in
|
||||||
"exact" ) grep -i -E -l "(^|(?<!#)\\s)${esc_domain}($|\\s|#)" ${lists} /dev/null 2>/dev/null;;
|
"exact" ) grep -i -E -l "(^|(?<!#)\\s)${domain}($|\\s|#)" ${lists} /dev/null 2>/dev/null;;
|
||||||
# Iterate through each regexp and check whether it matches the domainQuery
|
"wc" ) grep -i -o -m 1 "/${domain}/" ${lists} 2>/dev/null;;
|
||||||
# If it does, print the matching regexp and continue looping
|
* ) grep -i "${domain}" ${lists} /dev/null 2>/dev/null;;
|
||||||
# Input 1 - regexps | Input 2 - domainQuery
|
|
||||||
"regex" )
|
|
||||||
for list in ${lists}; do
|
|
||||||
if [[ "${domain}" =~ ${list} ]]; then
|
|
||||||
printf "%b\n" "${list}";
|
|
||||||
fi
|
|
||||||
done;;
|
|
||||||
* ) grep -i "${esc_domain}" ${lists} /dev/null 2>/dev/null;;
|
|
||||||
esac
|
esac
|
||||||
}
|
}
|
||||||
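`scanList` above escapes the full stops in the queried domain so grep treats them as literal dots rather than "any character". The escaping is one parameter expansion:

```shell
#!/usr/bin/env bash
# Escape full stops for use in a grep pattern, as scanList does.
domain="pi-hole.net"
esc_domain="${domain//./\\.}"
echo "${esc_domain}"
```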
|
|
||||||
@@ -54,16 +66,23 @@ Example: 'pihole -q -exact domain.com'
 Query the adlists for a specified domain
 
 Options:
+  -adlist             Print the name of the block list URL
   -exact              Search the block lists for exact domain matches
   -all                Return all query matches within a block list
   -h, --help          Show this help dialog"
     exit 0
 fi
 
+if [[ ! -e "$adListsList" ]]; then
+    echo -e "${COL_LIGHT_RED}The file $adListsList was not found${COL_NC}"
+    exit 1
+fi
+
 # Handle valid options
 if [[ "${options}" == *"-bp"* ]]; then
     exact="exact"; blockpage=true
 else
+    [[ "${options}" == *"-adlist"* ]] && adlist=true
     [[ "${options}" == *"-all"* ]] && all=true
     if [[ "${options}" == *"-exact"* ]]; then
         exact="exact"; matchType="exact ${matchType}"
@@ -88,115 +107,55 @@ if [[ -n "${str:-}" ]]; then
     exit 1
 fi
 
-scanDatabaseTable() {
-    local domain table type querystr result extra
-    domain="$(printf "%q" "${1}")"
-    table="${2}"
-    type="${3:-}"
-
-    # As underscores are legitimate parts of domains, we escape them when using the LIKE operator.
-    # Underscores are SQLite wildcards matching exactly one character. We obviously want to suppress this
-    # behavior. The "ESCAPE '\'" clause specifies that an underscore preceded by an '\' should be matched
-    # as a literal underscore character. We pretreat the $domain variable accordingly to escape underscores.
-    if [[ "${table}" == "gravity" ]]; then
-        case "${exact}" in
-            "exact" ) querystr="SELECT gravity.domain,adlist.address,adlist.enabled FROM gravity LEFT JOIN adlist ON adlist.id = gravity.adlist_id WHERE domain = '${domain}'";;
-            *       ) querystr="SELECT gravity.domain,adlist.address,adlist.enabled FROM gravity LEFT JOIN adlist ON adlist.id = gravity.adlist_id WHERE domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";;
-        esac
-    else
-        case "${exact}" in
-            "exact" ) querystr="SELECT domain,enabled FROM domainlist WHERE type = '${type}' AND domain = '${domain}'";;
-            *       ) querystr="SELECT domain,enabled FROM domainlist WHERE type = '${type}' AND domain LIKE '%${domain//_/\\_}%' ESCAPE '\\'";;
-        esac
-    fi
-
-    # Send prepared query to gravity database
-    result="$(sqlite3 "${gravityDBfile}" "${querystr}")" 2> /dev/null
-    if [[ -z "${result}" ]]; then
-        # Return early when there are no matches in this table
-        return
-    fi
-
-    if [[ "${table}" == "gravity" ]]; then
-        echo "${result}"
-        return
-    fi
-
-    # Mark domain as having been white-/blacklist matched (global variable)
+# Scan Whitelist and Blacklist
+lists="whitelist.txt blacklist.txt"
+mapfile -t results <<< "$(scanList "${domainQuery}" "${lists}" "${exact}")"
+if [[ -n "${results[*]}" ]]; then
     wbMatch=true
-    # Print table name
-    if [[ -z "${blockpage}" ]]; then
-        echo " ${matchType^} found in ${COL_BOLD}exact ${table}${COL_NC}"
-    fi
-
-    # Loop over results and print them
-    mapfile -t results <<< "${result}"
+    # Loop through each result in order to print unique file title once
     for result in "${results[@]}"; do
+        fileName="${result%%.*}"
         if [[ -n "${blockpage}" ]]; then
             echo "π ${result}"
             exit 0
-        fi
-        domain="${result/|*}"
-        if [[ "${result#*|}" == "0" ]]; then
-            extra=" (disabled)"
+        elif [[ -n "${exact}" ]]; then
+            echo " ${matchType^} found in ${COL_BOLD}${fileName^}${COL_NC}"
         else
-            extra=""
+            # Only print filename title once per file
+            if [[ ! "${fileName}" == "${fileName_prev:-}" ]]; then
+                echo " ${matchType^} found in ${COL_BOLD}${fileName^}${COL_NC}"
+                fileName_prev="${fileName}"
+            fi
+            echo "   ${result#*:}"
         fi
-        echo "   ${domain}${extra}"
     done
-}
+fi
 
-scanRegexDatabaseTable() {
-    local domain list
-    domain="${1}"
-    list="${2}"
-    type="${3:-}"
-
-    # Query all regex from the corresponding database tables
-    mapfile -t regexList < <(sqlite3 "${gravityDBfile}" "SELECT domain FROM domainlist WHERE type = ${type}" 2> /dev/null)
-
-    # If we have regexps to process
-    if [[ "${#regexList[@]}" -ne 0 ]]; then
-        # Split regexps over a new line
-        str_regexList=$(printf '%s\n' "${regexList[@]}")
-        # Check domain against regexps
-        mapfile -t regexMatches < <(scanList "${domain}" "${str_regexList}" "regex")
-        # If there were regex matches
-        if [[ "${#regexMatches[@]}" -ne 0 ]]; then
-            # Split matching regexps over a new line
-            str_regexMatches=$(printf '%s\n' "${regexMatches[@]}")
-            # Form a "matched" message
-            str_message="${matchType^} found in ${COL_BOLD}regex ${list}${COL_NC}"
-            # Form a "results" message
-            str_result="${COL_BOLD}${str_regexMatches}${COL_NC}"
-            # If we are displaying more than just the source of the block
-            if [[ -z "${blockpage}" ]]; then
-                # Set the wildcard match flag
+# Scan Wildcards
+if [[ -e "${wildcardlist}" ]]; then
+    # Determine all subdomains, domain and TLDs
+    mapfile -t wildcards <<< "$(processWildcards "${domainQuery}")"
+    for match in "${wildcards[@]}"; do
+        # Search wildcard list for matches
+        mapfile -t results <<< "$(scanList "${match}" "${wildcardlist}" "wc")"
+        if [[ -n "${results[*]}" ]]; then
+            if [[ -z "${wcMatch:-}" ]] && [[ -z "${blockpage}" ]]; then
                 wcMatch=true
-                # Echo the "matched" message, indented by one space
-                echo " ${str_message}"
-                # Echo the "results" message, each line indented by three spaces
-                # shellcheck disable=SC2001
-                echo "${str_result}" | sed 's/^/   /'
-            else
-                echo "π .wildcard"
-                exit 0
+                echo " ${matchType^} found in ${COL_BOLD}Wildcards${COL_NC}:"
             fi
+            case "${blockpage}" in
+                true ) echo "π ${wildcardlist##*/}"; exit 0;;
+                *    ) echo "   *.${match}";;
+            esac
         fi
-    fi
-}
+    done
+fi
 
-# Scan Whitelist and Blacklist
-scanDatabaseTable "${domainQuery}" "whitelist" "0"
-scanDatabaseTable "${domainQuery}" "blacklist" "1"
-
-# Scan Regex table
-scanRegexDatabaseTable "${domainQuery}" "whitelist" "2"
-scanRegexDatabaseTable "${domainQuery}" "blacklist" "3"
-
-# Query block lists
-mapfile -t results <<< "$(scanDatabaseTable "${domainQuery}" "gravity")"
+# Get version sorted *.domains filenames (without dir path)
+lists=("$(cd "$piholeDir" || exit 0; printf "%s\\n" -- *.domains | sort -V)")
+
+# Query blocklists for occurences of domain
+mapfile -t results <<< "$(scanList "${domainQuery}" "${lists[*]}" "${exact}")"
 
 # Handle notices
 if [[ -z "${wbMatch:-}" ]] && [[ -z "${wcMatch:-}" ]] && [[ -z "${results[*]}" ]]; then
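The underscore comments in the removed `scanDatabaseTable` describe real SQLite semantics: `_` in a `LIKE` pattern matches any single character, so literal underscores must be escaped with an `ESCAPE` clause. A minimal sketch (assumes the `sqlite3` CLI is installed; the table and values are illustrative, not Pi-hole's schema):

```shell
#!/usr/bin/env bash
db="$(mktemp)"
sqlite3 "$db" "CREATE TABLE gravity_demo(domain TEXT);
INSERT INTO gravity_demo VALUES ('ad_server.net'), ('adXserver.net');"

# Unescaped: '_' is a single-character wildcard, so both rows match.
sqlite3 "$db" "SELECT count(*) FROM gravity_demo WHERE domain LIKE '%ad_server%';"   # 2

# Escaped: only the literal underscore matches.
sqlite3 "$db" "SELECT count(*) FROM gravity_demo WHERE domain LIKE '%ad\_server%' ESCAPE '\';"   # 1

rm -f "$db"
```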
@@ -211,6 +170,29 @@ elif [[ -z "${all}" ]] && [[ "${#results[*]}" -ge 100 ]]; then
     exit 0
 fi
 
+# Remove unwanted content from non-exact $results
+if [[ -z "${exact}" ]]; then
+    # Delete lines starting with #
+    # Remove comments after domain
+    # Remove hosts format IP address
+    mapfile -t results <<< "$(IFS=$'\n'; sed \
+        -e "/:#/d" \
+        -e "s/[ \\t]#.*//g" \
+        -e "s/:.*[ \\t]/:/g" \
+        <<< "${results[*]}")"
+    # Exit if result was in a comment
+    [[ -z "${results[*]}" ]] && exit 0
+fi
+
+# Get adlist file content as array
+if [[ -n "${adlist}" ]] || [[ -n "${blockpage}" ]]; then
+    for adlistUrl in $(< "${adListsList}"); do
+        if [[ "${adlistUrl:0:4}" =~ (http|www.) ]]; then
+            adlists+=("${adlistUrl}")
+        fi
+    done
+fi
+
 # Print "Exact matches for" title
 if [[ -n "${exact}" ]] && [[ -z "${blockpage}" ]]; then
     plural=""; [[ "${#results[*]}" -gt 1 ]] && plural="es"
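The added hunk leans on `mapfile -t … <<< "$(…)"` to turn multi-line command output into a bash array, one element per line. A minimal sketch with illustrative input:

```shell
#!/usr/bin/env bash
# Simulated multi-line command output (one blocklist hit per line).
hits=$'list.0.domains:ads.example\nlist.1.domains:tracker.example'

# -t strips the trailing newline from each element.
mapfile -t results <<< "${hits}"

echo "${#results[@]}"    # 2
echo "${results[1]}"     # list.1.domains:tracker.example
```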
@@ -218,25 +200,28 @@ if [[ -n "${exact}" ]] && [[ -z "${blockpage}" ]]; then
 fi
 
 for result in "${results[@]}"; do
-    match="${result/|*/}"
-    extra="${result#*|}"
-    adlistAddress="${extra/|*/}"
-    extra="${extra#*|}"
-    if [[ "${extra}" == "0" ]]; then
-        extra="(disabled)"
-    else
-        extra=""
+    fileName="${result/:*/}"
+
+    # Determine *.domains URL using filename's number
+    if [[ -n "${adlist}" ]] || [[ -n "${blockpage}" ]]; then
+        fileNum="${fileName/list./}"; fileNum="${fileNum%%.*}"
+        fileName="${adlists[$fileNum]}"
+
+        # Discrepency occurs when adlists has been modified, but Gravity has not been run
+        if [[ -z "${fileName}" ]]; then
+            fileName="${COL_LIGHT_RED}(no associated adlists URL found)${COL_NC}"
+        fi
     fi
 
     if [[ -n "${blockpage}" ]]; then
-        echo "0 ${adlistAddress}"
+        echo "${fileNum} ${fileName}"
     elif [[ -n "${exact}" ]]; then
-        echo "   - ${adlistAddress} ${extra}"
+        echo "   ${fileName}"
     else
-        if [[ ! "${adlistAddress}" == "${adlistAddress_prev:-}" ]]; then
+        if [[ ! "${fileName}" == "${fileName_prev:-}" ]]; then
             count=""
-            echo " ${matchType^} found in ${COL_BOLD}${adlistAddress}${COL_NC}:"
-            adlistAddress_prev="${adlistAddress}"
+            echo " ${matchType^} found in ${COL_BOLD}${fileName}${COL_NC}:"
+            fileName_prev="${fileName}"
         fi
         : $((count++))
 
@@ -246,7 +231,7 @@ for result in "${results[@]}"; do
             [[ "${count}" -gt "${max_count}" ]] && continue
             echo "   ${COL_GRAY}Over ${count} results found, skipping rest of file${COL_NC}"
         else
-            echo "   ${match} ${extra}"
+            echo "   ${result#*:}"
         fi
     fi
 done
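The renamed variables in the hunk above are filled by bash parameter expansion on grep's `file:match` output. A sketch of the expansions used, with an illustrative input string:

```shell
#!/usr/bin/env bash
result="list.0.example.com.domains:ads.tracker.net"

fileName="${result/:*/}"     # drop the first ':' and everything after it
match="${result#*:}"         # drop everything up to and including the first ':'
fileNum="${fileName/list./}"; fileNum="${fileNum%%.*}"   # extract the list number

echo "${fileName}"   # list.0.example.com.domains
echo "${match}"      # ads.tracker.net
echo "${fileNum}"    # 0
```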
@@ -20,7 +20,7 @@ getInitSys() {
   elif [ -f /etc/init.d/cron ] && [ ! -h /etc/init.d/cron ]; then
     SYSTEMD=0
   else
-    echo "Unrecognized init system"
+    echo "Unrecognised init system"
     return 1
   fi
 }

@@ -70,5 +70,5 @@ setupcon
 reboot
 
 # Start showing the stats on the screen by running the command on another tty:
-# https://unix.stackexchange.com/questions/170063/start-a-process-on-a-different-tty
+# http://unix.stackexchange.com/questions/170063/start-a-process-on-a-different-tty
 #setsid sh -c 'exec /usr/local/bin/chronometer.sh <> /dev/tty1 >&0 2>&1'
@@ -31,6 +31,7 @@ source "/opt/pihole/COL_TABLE"
 # make_repo() sourced from basic-install.sh
 # update_repo() source from basic-install.sh
 # getGitFiles() sourced from basic-install.sh
+# get_binary_name() sourced from basic-install.sh
 # FTLcheckUpdate() sourced from basic-install.sh
 
 GitCheckUpdateAvail() {

@@ -128,12 +129,7 @@ main() {
         fi
     fi
 
-    local funcOutput
-    funcOutput=$(get_binary_name) #Store output of get_binary_name here
-    local binary
-    binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)
-
-    if FTLcheckUpdate "${binary}" > /dev/null; then
+    if FTLcheckUpdate > /dev/null; then
         FTL_update=true
         echo -e "  ${INFO} FTL:\\t\\t${COL_YELLOW}update available${COL_NC}"
     else

@@ -198,14 +194,6 @@ main() {
         ${PI_HOLE_FILES_DIR}/automated\ install/basic-install.sh --reconfigure --unattended || \
         echo -e "${basicError}" && exit 1
     fi
 
-    if [[ "${FTL_update}" == true || "${core_update}" == true || "${web_update}" == true ]]; then
-        # Force an update of the updatechecker
-        /opt/pihole/updatecheck.sh
-        /opt/pihole/updatecheck.sh x remote
-        echo -e "  ${INFO} Local version file information updated."
-    fi
-
     echo ""
     exit 0
 }
@@ -51,7 +51,6 @@ if [[ "$2" == "remote" ]]; then
 
     GITHUB_CORE_VERSION="$(json_extract tag_name "$(curl -s 'https://api.github.com/repos/pi-hole/pi-hole/releases/latest' 2> /dev/null)")"
     echo -n "${GITHUB_CORE_VERSION}" > "${GITHUB_VERSION_FILE}"
-    chmod 644 "${GITHUB_VERSION_FILE}"
 
     if [[ "${INSTALL_WEB_INTERFACE}" == true ]]; then
         GITHUB_WEB_VERSION="$(json_extract tag_name "$(curl -s 'https://api.github.com/repos/pi-hole/AdminLTE/releases/latest' 2> /dev/null)")"

@@ -67,7 +66,6 @@ else
 
     CORE_BRANCH="$(get_local_branch /etc/.pihole)"
     echo -n "${CORE_BRANCH}" > "${LOCAL_BRANCH_FILE}"
-    chmod 644 "${LOCAL_BRANCH_FILE}"
 
     if [[ "${INSTALL_WEB_INTERFACE}" == true ]]; then
         WEB_BRANCH="$(get_local_branch /var/www/html/admin)"

@@ -81,7 +79,6 @@ else
 
     CORE_VERSION="$(get_local_version /etc/.pihole)"
     echo -n "${CORE_VERSION}" > "${LOCAL_VERSION_FILE}"
-    chmod 644 "${LOCAL_VERSION_FILE}"
 
     if [[ "${INSTALL_WEB_INTERFACE}" == true ]]; then
         WEB_VERSION="$(get_local_version /var/www/html/admin)"
@@ -84,21 +84,6 @@ getRemoteVersion(){
     # Get the version from the remote origin
     local daemon="${1}"
     local version
-    local cachedVersions
-    local arrCache
-    cachedVersions="/etc/pihole/GitHubVersions"
-
-    #If the above file exists, then we can read from that. Prevents overuse of GitHub API
-    if [[ -f "$cachedVersions" ]]; then
-        IFS=' ' read -r -a arrCache < "$cachedVersions"
-        case $daemon in
-            "pi-hole"  ) echo "${arrCache[0]}";;
-            "AdminLTE" ) echo "${arrCache[1]}";;
-            "FTL"      ) echo "${arrCache[2]}";;
-        esac
-
-        return 0
-    fi
 
     version=$(curl --silent --fail "https://api.github.com/repos/pi-hole/${daemon}/releases/latest" | \
               awk -F: '$1 ~/tag_name/ { print $2 }' | \

@@ -112,48 +97,22 @@ getRemoteVersion(){
     return 0
 }
 
-getLocalBranch(){
-    # Get the checked out branch of the local directory
-    local directory="${1}"
-    local branch
-
-    # Local FTL btranch is stored in /etc/pihole/ftlbranch
-    if [[ "$1" == "FTL" ]]; then
-        branch="$(pihole-FTL branch)"
-    else
-        cd "${directory}" 2> /dev/null || { echo "${DEFAULT}"; return 1; }
-        branch=$(git rev-parse --abbrev-ref HEAD || echo "$DEFAULT")
-    fi
-    if [[ ! "${branch}" =~ ^v ]]; then
-        if [[ "${branch}" == "master" ]]; then
-            echo ""
-        elif [[ "${branch}" == "HEAD" ]]; then
-            echo "in detached HEAD state at "
-        else
-            echo "${branch} "
-        fi
-    else
-        # Branch started in "v"
-        echo "release "
-    fi
-    return 0
-}
-
 versionOutput() {
     [[ "$1" == "pi-hole" ]] && GITDIR=$COREGITDIR
     [[ "$1" == "AdminLTE" ]] && GITDIR=$WEBGITDIR
     [[ "$1" == "FTL" ]] && GITDIR="FTL"
 
-    [[ "$2" == "-c" ]] || [[ "$2" == "--current" ]] || [[ -z "$2" ]] && current=$(getLocalVersion $GITDIR) && branch=$(getLocalBranch $GITDIR)
+    [[ "$2" == "-c" ]] || [[ "$2" == "--current" ]] || [[ -z "$2" ]] && current=$(getLocalVersion $GITDIR)
     [[ "$2" == "-l" ]] || [[ "$2" == "--latest" ]] || [[ -z "$2" ]] && latest=$(getRemoteVersion "$1")
     if [[ "$2" == "-h" ]] || [[ "$2" == "--hash" ]]; then
-        [[ "$3" == "-c" ]] || [[ "$3" == "--current" ]] || [[ -z "$3" ]] && curHash=$(getLocalHash "$GITDIR") && branch=$(getLocalBranch $GITDIR)
+        [[ "$3" == "-c" ]] || [[ "$3" == "--current" ]] || [[ -z "$3" ]] && curHash=$(getLocalHash "$GITDIR")
         [[ "$3" == "-l" ]] || [[ "$3" == "--latest" ]] || [[ -z "$3" ]] && latHash=$(getRemoteHash "$1" "$(cd "$GITDIR" 2> /dev/null && git rev-parse --abbrev-ref HEAD)")
     fi
 
     if [[ -n "$current" ]] && [[ -n "$latest" ]]; then
-        output="${1^} version is $branch$current (Latest: $latest)"
+        output="${1^} version is $current (Latest: $latest)"
     elif [[ -n "$current" ]] && [[ -z "$latest" ]]; then
-        output="Current ${1^} version is $branch$current."
+        output="Current ${1^} version is $current"
     elif [[ -z "$current" ]] && [[ -n "$latest" ]]; then
         output="Latest ${1^} version is $latest"
     elif [[ "$curHash" == "N/A" ]] || [[ "$latHash" == "N/A" ]]; then

@@ -203,7 +162,7 @@ Repositories:
 Options:
     -c, --current        Return the current version
     -l, --latest         Return the latest version
-    --hash               Return the GitHub hash from your local repositories
+    --hash               Return the Github hash from your local repositories
     -h, --help           Show this help dialog"
     exit 0
 }
@@ -10,21 +10,12 @@
 # This file is copyright under the latest version of the EUPL.
 # Please see LICENSE file for your rights under this license.
 
+readonly setupVars="/etc/pihole/setupVars.conf"
 readonly dnsmasqconfig="/etc/dnsmasq.d/01-pihole.conf"
 readonly dhcpconfig="/etc/dnsmasq.d/02-pihole-dhcp.conf"
 readonly FTLconf="/etc/pihole/pihole-FTL.conf"
 # 03 -> wildcards
 readonly dhcpstaticconfig="/etc/dnsmasq.d/04-pihole-static-dhcp.conf"
-readonly dnscustomfile="/etc/pihole/custom.list"
-readonly dnscustomcnamefile="/etc/dnsmasq.d/05-pihole-custom-cname.conf"
-
-readonly gravityDBfile="/etc/pihole/gravity.db"
-
-# Source install script for ${setupVars}, ${PI_HOLE_BIN_DIR} and valid_ip()
-readonly PI_HOLE_FILES_DIR="/etc/.pihole"
-# shellcheck disable=SC2034  # used in basic-install
-PH_TEST="true"
-source "${PI_HOLE_FILES_DIR}/automated install/basic-install.sh"
 
 coltable="/opt/pihole/COL_TABLE"
 if [[ -f ${coltable} ]]; then

@@ -41,6 +32,7 @@ Options:
  -c, celsius         Set Celsius as preferred temperature unit
  -f, fahrenheit      Set Fahrenheit as preferred temperature unit
  -k, kelvin          Set Kelvin as preferred temperature unit
+  -r, hostrecord      Add a name to the DNS associated to an IPv4/IPv6 address
  -e, email           Set an administrative contact address for the Block Page
  -h, --help          Show this help dialog
  -i, interface       Specify dnsmasq's interface listening behavior
@@ -93,9 +85,9 @@ SetTemperatureUnit() {
 
 HashPassword() {
     # Compute password hash twice to avoid rainbow table vulnerability
-    return=$(echo -n "${1}" | sha256sum | sed 's/\s.*$//')
-    return=$(echo -n "${return}" | sha256sum | sed 's/\s.*$//')
-    echo "${return}"
+    return=$(echo -n ${1} | sha256sum | sed 's/\s.*$//')
+    return=$(echo -n ${return} | sha256sum | sed 's/\s.*$//')
+    echo ${return}
 }
 
 SetWebPassword() {
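The quoting removed in this hunk matters: unquoted `echo -n ${1}` word-splits a password containing spaces before hashing. A sketch of the quoted double SHA-256 round from the left-hand side (assumes coreutils `sha256sum` and GNU `sed`; the function name is the same as in the diff, the password is illustrative):

```shell
#!/usr/bin/env bash
HashPassword() {
    local h
    # First round: hash the password itself (quoted, so spaces survive).
    h=$(echo -n "${1}" | sha256sum | sed 's/\s.*$//')
    # Second round: hash the hex digest of the first round.
    h=$(echo -n "${h}" | sha256sum | sed 's/\s.*$//')
    echo "${h}"
}

HashPassword "correct horse battery staple"   # deterministic 64-char hex digest
```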
@@ -149,18 +141,18 @@ ProcessDNSSettings() {
     delete_dnsmasq_setting "server"
 
     COUNTER=1
-    while true ; do
+    while [[ 1 ]]; do
         var=PIHOLE_DNS_${COUNTER}
         if [ -z "${!var}" ]; then
             break;
         fi
         add_dnsmasq_setting "server" "${!var}"
-        (( COUNTER++ ))
+        let COUNTER=COUNTER+1
     done
 
     # The option LOCAL_DNS_PORT is deprecated
     # We apply it once more, and then convert it into the current format
-    if [ -n "${LOCAL_DNS_PORT}" ]; then
+    if [ ! -z "${LOCAL_DNS_PORT}" ]; then
         add_dnsmasq_setting "server" "127.0.0.1#${LOCAL_DNS_PORT}"
         add_setting "PIHOLE_DNS_${COUNTER}" "127.0.0.1#${LOCAL_DNS_PORT}"
         delete_setting "LOCAL_DNS_PORT"
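The loop in this hunk uses bash indirect expansion, `${!var}`: the string stored in `var` names the variable that is actually read, which is how numbered `PIHOLE_DNS_N` settings are walked until one is unset. A minimal sketch with illustrative values:

```shell
#!/usr/bin/env bash
PIHOLE_DNS_1="8.8.8.8"
PIHOLE_DNS_2="1.1.1.1"

COUNTER=1
servers=()
while true; do
    var="PIHOLE_DNS_${COUNTER}"
    # ${!var} reads the variable whose name is stored in $var.
    [ -z "${!var}" ] && break
    servers+=("${!var}")
    (( COUNTER++ ))
done

echo "${#servers[@]}"   # 2
```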
@@ -183,13 +175,14 @@ ProcessDNSSettings() {
 
     if [[ "${DNSSEC}" == true ]]; then
         echo "dnssec
+trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
 trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
 " >> "${dnsmasqconfig}"
     fi
 
     delete_dnsmasq_setting "host-record"
 
-    if [ -n "${HOSTRECORD}" ]; then
+    if [ ! -z "${HOSTRECORD}" ]; then
         add_dnsmasq_setting "host-record" "${HOSTRECORD}"
     fi
 
@@ -214,40 +207,9 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
     fi
 
     if [[ "${CONDITIONAL_FORWARDING}" == true ]]; then
-        # Convert legacy "conditional forwarding" to rev-server configuration
-        REV_SERVER=true
-        add_setting "REV_SERVER" "true"
-
-        REV_SERVER_DOMAIN="${CONDITIONAL_FORWARDING_DOMAIN}"
-        add_setting "REV_SERVER_DOMAIN" "${REV_SERVER_DOMAIN}"
-
-        REV_SERVER_TARGET="${CONDITIONAL_FORWARDING_IP}"
-        add_setting "REV_SERVER_TARGET" "${REV_SERVER_TARGET}"
-
-        # Remove obsolete settings from setupVars.conf
-        delete_setting "CONDITIONAL_FORWARDING"
-        delete_setting "CONDITIONAL_FORWARDING_REVERSE"
-        delete_setting "CONDITIONAL_FORWARDING_DOMAIN"
-        delete_setting "CONDITIONAL_FORWARDING_IP"
-
-        # Convert existing input to /24 subnet (preserves legacy behavior)
-        # This sed converts "192.168.1.2" to "192.168.1.0/24"
-        # shellcheck disable=2001
-        REV_SERVER_CIDR="$(sed "s+\\.[0-9]*$+\\.0/24+" <<< "${REV_SERVER_TARGET}")"
-        add_setting "REV_SERVER_CIDR" "${REV_SERVER_CIDR}"
+        add_dnsmasq_setting "server=/${CONDITIONAL_FORWARDING_DOMAIN}/${CONDITIONAL_FORWARDING_IP}"
+        add_dnsmasq_setting "server=/${CONDITIONAL_FORWARDING_REVERSE}/${CONDITIONAL_FORWARDING_IP}"
     fi
 
-    if [[ "${REV_SERVER}" == true ]]; then
-        add_dnsmasq_setting "rev-server=${REV_SERVER_CIDR},${REV_SERVER_TARGET}"
-        if [ -n "${REV_SERVER_DOMAIN}" ]; then
-            add_dnsmasq_setting "server=/${REV_SERVER_DOMAIN}/${REV_SERVER_TARGET}"
-        fi
-    fi
-
-    # Prevent Firefox from automatically switching over to DNS-over-HTTPS
-    # This follows https://support.mozilla.org/en-US/kb/configuring-networks-disable-dns-over-https
-    # (sourced 7th September 2019)
-    add_dnsmasq_setting "server=/use-application-dns.net/"
 }
 
 SetDNSServers() {
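The removed rev-server conversion builds a /24 CIDR with a single `sed` substitution on the final octet. A minimal sketch of just that rewrite, with an illustrative address:

```shell
#!/usr/bin/env bash
target="192.168.1.2"
# Replace the final octet with 0 and append the /24 prefix length.
cidr="$(sed "s+\.[0-9]*$+\.0/24+" <<< "${target}")"
echo "${cidr}"   # 192.168.1.0/24
```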
@@ -256,16 +218,7 @@ SetDNSServers() {
     IFS=',' read -r -a array <<< "${args[2]}"
     for index in "${!array[@]}"
     do
-        # Replace possible "\#" by "#". This fixes AdminLTE#1427
-        local ip
-        ip="${array[index]//\\#/#}"
-
-        if valid_ip "${ip}" || valid_ip6 "${ip}" ; then
-            add_setting "PIHOLE_DNS_$((index+1))" "${ip}"
-        else
-            echo -e "  ${CROSS} Invalid IP has been passed"
-            exit 1
-        fi
+        add_setting "PIHOLE_DNS_$((index+1))" "${array[index]}"
     done
 
     if [[ "${args[3]}" == "domain-needed" ]]; then
@@ -286,13 +239,16 @@ SetDNSServers() {
         change_setting "DNSSEC" "false"
     fi
 
-    if [[ "${args[6]}" == "rev-server" ]]; then
-        change_setting "REV_SERVER" "true"
-        change_setting "REV_SERVER_CIDR" "${args[7]}"
-        change_setting "REV_SERVER_TARGET" "${args[8]}"
-        change_setting "REV_SERVER_DOMAIN" "${args[9]}"
+    if [[ "${args[6]}" == "conditional_forwarding" ]]; then
+        change_setting "CONDITIONAL_FORWARDING" "true"
+        change_setting "CONDITIONAL_FORWARDING_IP" "${args[7]}"
+        change_setting "CONDITIONAL_FORWARDING_DOMAIN" "${args[8]}"
+        change_setting "CONDITIONAL_FORWARDING_REVERSE" "${args[9]}"
     else
-        change_setting "REV_SERVER" "false"
+        change_setting "CONDITIONAL_FORWARDING" "false"
+        delete_setting "CONDITIONAL_FORWARDING_IP"
+        delete_setting "CONDITIONAL_FORWARDING_DOMAIN"
+        delete_setting "CONDITIONAL_FORWARDING_REVERSE"
     fi
 
     ProcessDNSSettings
@@ -318,7 +274,7 @@ Reboot() {
 }
 
 RestartDNS() {
-    "${PI_HOLE_BIN_DIR}"/pihole restartdns
+    /usr/local/bin/pihole restartdns
 }
 
 SetQueryLogOptions() {
@@ -366,7 +322,6 @@ dhcp-option=option:router,${DHCP_ROUTER}
 dhcp-leasefile=/etc/pihole/dhcp.leases
 #quiet-dhcp
 " > "${dhcpconfig}"
-    chmod 644 "${dhcpconfig}"
 
     if [[ "${PIHOLE_DOMAIN}" != "none" ]]; then
         echo "domain=${PIHOLE_DOMAIN}" >> "${dhcpconfig}"
@@ -408,14 +363,6 @@ EnableDHCP() {
     delete_dnsmasq_setting "dhcp-"
     delete_dnsmasq_setting "quiet-dhcp"
 
-    # If a DHCP client claims that its name is "wpad", ignore that.
-    # This fixes a security hole. see CERT Vulnerability VU#598349
-    # We also ignore "localhost" as Windows behaves strangely if a
-    # device claims this host name
-    add_dnsmasq_setting "dhcp-name-match=set:hostname-ignore,wpad
-dhcp-name-match=set:hostname-ignore,localhost
-dhcp-ignore-names=tag:hostname-ignore"
-
     ProcessDHCPSettings
 
     RestartDNS
@@ -437,44 +384,24 @@ SetWebUILayout() {
|
|||||||
change_setting "WEBUIBOXEDLAYOUT" "${args[2]}"
|
change_setting "WEBUIBOXEDLAYOUT" "${args[2]}"
|
||||||
}
|
}
|
||||||
|
|
||||||
SetWebUITheme() {
|
|
||||||
change_setting "WEBTHEME" "${args[2]}"
|
|
||||||
}
|
|
||||||
|
|
||||||
CheckUrl(){
|
|
||||||
local regex
|
|
||||||
# Check for characters NOT allowed in URLs
|
|
||||||
regex="[^a-zA-Z0-9:/?&%=~._-]"
|
|
||||||
if [[ "${1}" =~ ${regex} ]]; then
|
|
||||||
return 1
|
|
||||||
else
|
|
||||||
return 0
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
CustomizeAdLists() {
|
CustomizeAdLists() {
|
||||||
local address
|
list="/etc/pihole/adlists.list"
|
||||||
address="${args[3]}"
|
|
||||||
local comment
|
|
||||||
comment="${args[4]}"
|
|
||||||
|
|
||||||
if CheckUrl "${address}"; then
|
|
||||||
if [[ "${args[2]}" == "enable" ]]; then
|
if [[ "${args[2]}" == "enable" ]]; then
|
||||||
sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 1 WHERE address = '${address}'"
|
sed -i "\\@${args[3]}@s/^#http/http/g" "${list}"
|
||||||
elif [[ "${args[2]}" == "disable" ]]; then
|
elif [[ "${args[2]}" == "disable" ]]; then
|
||||||
sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 0 WHERE address = '${address}'"
|
sed -i "\\@${args[3]}@s/^http/#http/g" "${list}"
|
||||||
elif [[ "${args[2]}" == "add" ]]; then
|
elif [[ "${args[2]}" == "add" ]]; then
|
||||||
sqlite3 "${gravityDBfile}" "INSERT OR IGNORE INTO adlist (address, comment) VALUES ('${address}', '${comment}')"
|
if [[ $(grep -c "^${args[3]}$" "${list}") -eq 0 ]] ; then
|
||||||
|
echo "${args[3]}" >> ${list}
|
||||||
|
fi
|
||||||
elif [[ "${args[2]}" == "del" ]]; then
|
elif [[ "${args[2]}" == "del" ]]; then
|
||||||
sqlite3 "${gravityDBfile}" "DELETE FROM adlist WHERE address = '${address}'"
|
var=$(echo "${args[3]}" | sed 's/\//\\\//g')
|
||||||
|
sed -i "/${var}/Id" "${list}"
|
||||||
else
|
else
|
||||||
echo "Not permitted"
|
echo "Not permitted"
|
||||||
return 1
|
return 1
|
||||||
fi
|
fi
|
||||||
else
|
|
||||||
echo "Invalid Url"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
}
|
}
|
||||||
|
|
||||||
SetPrivacyMode() {
|
SetPrivacyMode() {
|
||||||
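As an aside, the `CheckUrl` helper removed in the hunk above is a pure character-class filter rather than a full URL parser. A standalone sketch of the same test (the `check_url` name and sample URLs are illustrative, not from the repo):

```shell
# Rejects any string containing a character outside the allowed URL set;
# this mirrors the regex of the deleted CheckUrl helper.
check_url() {
    local regex="[^a-zA-Z0-9:/?&%=~._-]"
    if [[ "${1}" =~ ${regex} ]]; then
        return 1    # disallowed character found
    else
        return 0    # only allowed characters present
    fi
}

check_url "https://example.com/hosts.txt" && echo "valid"
check_url "https://example.com/bad list"  || echo "invalid"
```

Note that the class deliberately excludes characters such as spaces and `#`, so a URL with a fragment would be rejected too.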
@@ -518,6 +445,32 @@ RemoveDHCPStaticAddress() {
     sed -i "/dhcp-host=${mac}.*/d" "${dhcpstaticconfig}"
 }

+SetHostRecord() {
+    if [[ "${1}" == "-h" ]] || [[ "${1}" == "--help" ]]; then
+        echo "Usage: pihole -a hostrecord <domain> [IPv4-address],[IPv6-address]
+Example: 'pihole -a hostrecord home.domain.com 192.168.1.1,2001:db8:a0b:12f0::1'
+Add a name to the DNS associated to an IPv4/IPv6 address
+
+Options:
+  \"\" Empty: Remove host record
+  -h, --help Show this help dialog"
+        exit 0
+    fi
+
+    if [[ -n "${args[3]}" ]]; then
+        change_setting "HOSTRECORD" "${args[2]},${args[3]}"
+        echo -e "  ${TICK} Setting host record for ${args[2]} to ${args[3]}"
+    else
+        change_setting "HOSTRECORD" ""
+        echo -e "  ${TICK} Removing host record"
+    fi
+
+    ProcessDNSSettings
+
+    # Restart dnsmasq to load new configuration
+    RestartDNS
+}

 SetAdminEmail() {
     if [[ "${1}" == "-h" ]] || [[ "${1}" == "--help" ]]; then
         echo "Usage: pihole -a email <address>
@@ -531,16 +484,6 @@ Options:
     fi

     if [[ -n "${args[2]}" ]]; then
-
-        # Sanitize email address in case of security issues
-        # Regex from https://stackoverflow.com/a/2138832/4065967
-        local regex
-        regex="^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\$"
-        if [[ ! "${args[2]}" =~ ${regex} ]]; then
-            echo -e "  ${CROSS} Invalid email address"
-            exit 0
-        fi
-
         change_setting "ADMIN_EMAIL" "${args[2]}"
         echo -e "  ${TICK} Setting admin contact to ${args[2]}"
     else
@@ -566,10 +509,10 @@ Interfaces:
     fi

     if [[ "${args[2]}" == "all" ]]; then
-        echo -e "  ${INFO} Listening on all interfaces, permitting all origins. Please use a firewall!"
+        echo -e "  ${INFO} Listening on all interfaces, permiting all origins. Please use a firewall!"
         change_setting "DNSMASQ_LISTENING" "all"
     elif [[ "${args[2]}" == "local" ]]; then
-        echo -e "  ${INFO} Listening on all interfaces, permitting origins from one hop away (LAN)"
+        echo -e "  ${INFO} Listening on all interfaces, permiting origins from one hop away (LAN)"
         change_setting "DNSMASQ_LISTENING" "local"
     else
         echo -e "  ${INFO} Listening only on interface ${PIHOLE_INTERFACE}"
@@ -586,104 +529,32 @@ Interfaces:
 }

 Teleporter() {
-    local datetimestamp
-    datetimestamp=$(date "+%Y-%m-%d_%H-%M-%S")
+    local datetimestamp=$(date "+%Y-%m-%d_%H-%M-%S")
     php /var/www/html/admin/scripts/pi-hole/php/teleporter.php > "pi-hole-teleporter_${datetimestamp}.tar.gz"
 }

-checkDomain()
-{
-    local domain validDomain
-    # Convert to lowercase
-    domain="${1,,}"
-    validDomain=$(grep -P "^((-|_)*[a-z\\d]((-|_)*[a-z\\d])*(-|_)*)(\\.(-|_)*([a-z\\d]((-|_)*[a-z\\d])*))*$" <<< "${domain}") # Valid chars check
-    validDomain=$(grep -P "^[^\\.]{1,63}(\\.[^\\.]{1,63})*$" <<< "${validDomain}") # Length of each label
-    echo "${validDomain}"
-}
-
 addAudit()
 {
     shift # skip "-a"
     shift # skip "audit"
-    local domains validDomain
-    domains=""
-    for domain in "$@"
+    for var in "$@"
     do
-        # Check domain to be added. Only continue if it is valid
-        validDomain="$(checkDomain "${domain}")"
-        if [[ -n "${validDomain}" ]]; then
-            # Put comma in between domains when there is
-            # more than one domains to be added
-            # SQL INSERT allows adding multiple rows at once using the format
-            ## INSERT INTO table (domain) VALUES ('abc.de'),('fgh.ij'),('klm.no'),('pqr.st');
-            if [[ -n "${domains}" ]]; then
-                domains="${domains},"
-            fi
-            domains="${domains}('${domain}')"
-        fi
+        echo "${var}" >> /etc/pihole/auditlog.list
     done
-    # Insert only the domain here. The date_added field will be
-    # filled with its default value (date_added = current timestamp)
-    sqlite3 "${gravityDBfile}" "INSERT INTO domain_audit (domain) VALUES ${domains};"
 }

 clearAudit()
 {
-    sqlite3 "${gravityDBfile}" "DELETE FROM domain_audit;"
+    echo -n "" > /etc/pihole/auditlog.list
 }

 SetPrivacyLevel() {
     # Set privacy level. Minimum is 0, maximum is 4
     if [ "${args[2]}" -ge 0 ] && [ "${args[2]}" -le 4 ]; then
         changeFTLsetting "PRIVACYLEVEL" "${args[2]}"
-        pihole restartdns reload-lists
     fi
 }

-AddCustomDNSAddress() {
-    echo -e "  ${TICK} Adding custom DNS entry..."
-
-    ip="${args[2]}"
-    host="${args[3]}"
-    echo "${ip} ${host}" >> "${dnscustomfile}"
-
-    # Restart dnsmasq to load new custom DNS entries
-    RestartDNS
-}
-
-RemoveCustomDNSAddress() {
-    echo -e "  ${TICK} Removing custom DNS entry..."
-
-    ip="${args[2]}"
-    host="${args[3]}"
-    sed -i "/${ip} ${host}/d" "${dnscustomfile}"
-
-    # Restart dnsmasq to update removed custom DNS entries
-    RestartDNS
-}
-
-AddCustomCNAMERecord() {
-    echo -e "  ${TICK} Adding custom CNAME record..."
-
-    domain="${args[2]}"
-    target="${args[3]}"
-    echo "cname=${domain},${target}" >> "${dnscustomcnamefile}"
-
-    # Restart dnsmasq to load new custom CNAME records
-    RestartDNS
-}
-
-RemoveCustomCNAMERecord() {
-    echo -e "  ${TICK} Removing custom CNAME record..."
-
-    domain="${args[2]}"
-    target="${args[3]}"
-    sed -i "/cname=${domain},${target}/d" "${dnscustomcnamefile}"
-
-    # Restart dnsmasq to update removed custom CNAME records
-    RestartDNS
-}
-
 main() {
     args=("$@")

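The deleted `addAudit` comments above describe batching all domains into a single multi-row `INSERT`. A minimal sketch of that VALUES-list construction, assuming the `sqlite3` CLI is installed and using a throwaway database file rather than Pi-hole's gravity.db:

```shell
# Build the VALUES list the same way addAudit did:
# INSERT INTO table (domain) VALUES ('abc.de'),('fgh.ij'),('klm.no');
db="$(mktemp)"
sqlite3 "$db" "CREATE TABLE domain_audit (id INTEGER PRIMARY KEY AUTOINCREMENT, domain TEXT UNIQUE NOT NULL);"

domains=""
for domain in abc.de fgh.ij klm.no; do
    # Put a comma between entries when there is more than one domain
    if [[ -n "${domains}" ]]; then
        domains="${domains},"
    fi
    domains="${domains}('${domain}')"
done

# One round-trip inserts all rows at once
sqlite3 "$db" "INSERT INTO domain_audit (domain) VALUES ${domains};"
sqlite3 "$db" "SELECT COUNT(*) FROM domain_audit;"
rm -f "$db"
```

One INSERT per batch avoids invoking `sqlite3` (and paying a transaction) once per domain.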
@@ -702,12 +573,12 @@ main() {
         "enabledhcp" ) EnableDHCP;;
         "disabledhcp" ) DisableDHCP;;
         "layout" ) SetWebUILayout;;
-        "theme" ) SetWebUITheme;;
         "-h" | "--help" ) helpFunc;;
         "privacymode" ) SetPrivacyMode;;
         "resolve" ) ResolutionSettings;;
         "addstaticdhcp" ) AddDHCPStaticAddress;;
         "removestaticdhcp" ) RemoveDHCPStaticAddress;;
+        "-r" | "hostrecord" ) SetHostRecord "$3";;
         "-e" | "email" ) SetAdminEmail "$3";;
         "-i" | "interface" ) SetListeningMode "$@";;
         "-t" | "teleporter" ) Teleporter;;
@@ -715,10 +586,6 @@ main() {
         "audit" ) addAudit "$@";;
         "clearaudit" ) clearAudit;;
         "-l" | "privacylevel" ) SetPrivacyLevel;;
-        "addcustomdns" ) AddCustomDNSAddress;;
-        "removecustomdns" ) RemoveCustomDNSAddress;;
-        "addcustomcname" ) AddCustomCNAMERecord;;
-        "removecustomcname" ) RemoveCustomCNAMERecord;;
         * ) helpFunc;;
     esac

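For orientation, `main()` copies its positional parameters into `args`, so handlers such as `SetHostRecord` read the domain from `args[2]` and the address from `args[3]`. A minimal sketch of that indexing (the `demo_dispatch` function is illustrative and assumes dispatch on `args[1]`, which the hunk above does not show directly):

```shell
# args=("$@") maps $1 to args[0], $2 to args[1], and so on, so for
# "-a hostrecord <domain> <ip>" the handler sees the domain in args[2]
# and the address in args[3].
demo_dispatch() {
    local args=("$@")
    case "${args[1]}" in
        "hostrecord" ) echo "record: ${args[2]} -> ${args[3]}";;
        *            ) echo "unknown";;
    esac
}

demo_dispatch "-a" "hostrecord" "home.domain.com" "192.168.1.1"
# prints: record: home.domain.com -> 192.168.1.1
```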
@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/bin/bash
 # Pi-hole: A black hole for Internet advertisements
 # (c) 2017 Pi-hole, LLC (https://pi-hole.net)
 # Network-wide ad blocking via your own hardware.

@@ -1,189 +0,0 @@
-PRAGMA foreign_keys=OFF;
-BEGIN TRANSACTION;
-
-CREATE TABLE "group"
-(
-    id INTEGER PRIMARY KEY AUTOINCREMENT,
-    enabled BOOLEAN NOT NULL DEFAULT 1,
-    name TEXT UNIQUE NOT NULL,
-    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
-    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
-    description TEXT
-);
-INSERT INTO "group" (id,enabled,name,description) VALUES (0,1,'Default','The default group');
-
-CREATE TABLE domainlist
-(
-    id INTEGER PRIMARY KEY AUTOINCREMENT,
-    type INTEGER NOT NULL DEFAULT 0,
-    domain TEXT NOT NULL,
-    enabled BOOLEAN NOT NULL DEFAULT 1,
-    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
-    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
-    comment TEXT,
-    UNIQUE(domain, type)
-);
-
-CREATE TABLE adlist
-(
-    id INTEGER PRIMARY KEY AUTOINCREMENT,
-    address TEXT UNIQUE NOT NULL,
-    enabled BOOLEAN NOT NULL DEFAULT 1,
-    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
-    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
-    comment TEXT
-);
-
-CREATE TABLE adlist_by_group
-(
-    adlist_id INTEGER NOT NULL REFERENCES adlist (id),
-    group_id INTEGER NOT NULL REFERENCES "group" (id),
-    PRIMARY KEY (adlist_id, group_id)
-);
-
-CREATE TABLE gravity
-(
-    domain TEXT NOT NULL,
-    adlist_id INTEGER NOT NULL REFERENCES adlist (id)
-);
-
-CREATE TABLE info
-(
-    property TEXT PRIMARY KEY,
-    value TEXT NOT NULL
-);
-
-INSERT INTO "info" VALUES('version','12');
-
-CREATE TABLE domain_audit
-(
-    id INTEGER PRIMARY KEY AUTOINCREMENT,
-    domain TEXT UNIQUE NOT NULL,
-    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int))
-);
-
-CREATE TABLE domainlist_by_group
-(
-    domainlist_id INTEGER NOT NULL REFERENCES domainlist (id),
-    group_id INTEGER NOT NULL REFERENCES "group" (id),
-    PRIMARY KEY (domainlist_id, group_id)
-);
-
-CREATE TABLE client
-(
-    id INTEGER PRIMARY KEY AUTOINCREMENT,
-    ip TEXT NOT NULL UNIQUE,
-    date_added INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
-    date_modified INTEGER NOT NULL DEFAULT (cast(strftime('%s', 'now') as int)),
-    comment TEXT
-);
-
-CREATE TABLE client_by_group
-(
-    client_id INTEGER NOT NULL REFERENCES client (id),
-    group_id INTEGER NOT NULL REFERENCES "group" (id),
-    PRIMARY KEY (client_id, group_id)
-);
-
-CREATE TRIGGER tr_adlist_update AFTER UPDATE ON adlist
-    BEGIN
-      UPDATE adlist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE address = NEW.address;
-    END;
-
-CREATE TRIGGER tr_client_update AFTER UPDATE ON client
-    BEGIN
-      UPDATE client SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE ip = NEW.ip;
-    END;
-
-CREATE TRIGGER tr_domainlist_update AFTER UPDATE ON domainlist
-    BEGIN
-      UPDATE domainlist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
-    END;
-
-CREATE VIEW vw_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
-    FROM domainlist
-    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
-    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
-    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
-    AND domainlist.type = 0
-    ORDER BY domainlist.id;
-
-CREATE VIEW vw_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
-    FROM domainlist
-    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
-    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
-    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
-    AND domainlist.type = 1
-    ORDER BY domainlist.id;
-
-CREATE VIEW vw_regex_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
-    FROM domainlist
-    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
-    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
-    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
-    AND domainlist.type = 2
-    ORDER BY domainlist.id;
-
-CREATE VIEW vw_regex_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
-    FROM domainlist
-    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
-    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
-    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
-    AND domainlist.type = 3
-    ORDER BY domainlist.id;
-
-CREATE VIEW vw_gravity AS SELECT domain, adlist_by_group.group_id AS group_id
-    FROM gravity
-    LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = gravity.adlist_id
-    LEFT JOIN adlist ON adlist.id = gravity.adlist_id
-    LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
-    WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1);
-
-CREATE VIEW vw_adlist AS SELECT DISTINCT address, adlist.id AS id
-    FROM adlist
-    LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = adlist.id
-    LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
-    WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1)
-    ORDER BY adlist.id;
-
-CREATE TRIGGER tr_domainlist_add AFTER INSERT ON domainlist
-    BEGIN
-      INSERT INTO domainlist_by_group (domainlist_id, group_id) VALUES (NEW.id, 0);
-    END;
-
-CREATE TRIGGER tr_client_add AFTER INSERT ON client
-    BEGIN
-      INSERT INTO client_by_group (client_id, group_id) VALUES (NEW.id, 0);
-    END;
-
-CREATE TRIGGER tr_adlist_add AFTER INSERT ON adlist
-    BEGIN
-      INSERT INTO adlist_by_group (adlist_id, group_id) VALUES (NEW.id, 0);
-    END;
-
-CREATE TRIGGER tr_group_update AFTER UPDATE ON "group"
-    BEGIN
-      UPDATE "group" SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE id = NEW.id;
-    END;
-
-CREATE TRIGGER tr_group_zero AFTER DELETE ON "group"
-    BEGIN
-      INSERT OR IGNORE INTO "group" (id,enabled,name) VALUES (0,1,'Default');
-    END;
-
-CREATE TRIGGER tr_domainlist_delete AFTER DELETE ON domainlist
-    BEGIN
-      DELETE FROM domainlist_by_group WHERE domainlist_id = OLD.id;
-    END;
-
-CREATE TRIGGER tr_adlist_delete AFTER DELETE ON adlist
-    BEGIN
-      DELETE FROM adlist_by_group WHERE adlist_id = OLD.id;
-    END;
-
-CREATE TRIGGER tr_client_delete AFTER DELETE ON client
-    BEGIN
-      DELETE FROM client_by_group WHERE client_id = OLD.id;
-    END;
-
-COMMIT;

@@ -1,42 +0,0 @@
-.timeout 30000
-
-ATTACH DATABASE '/etc/pihole/gravity.db' AS OLD;
-
-BEGIN TRANSACTION;
-
-DROP TRIGGER tr_domainlist_add;
-DROP TRIGGER tr_client_add;
-DROP TRIGGER tr_adlist_add;
-
-INSERT OR REPLACE INTO "group" SELECT * FROM OLD."group";
-INSERT OR REPLACE INTO domain_audit SELECT * FROM OLD.domain_audit;
-
-INSERT OR REPLACE INTO domainlist SELECT * FROM OLD.domainlist;
-INSERT OR REPLACE INTO domainlist_by_group SELECT * FROM OLD.domainlist_by_group;
-
-INSERT OR REPLACE INTO adlist SELECT * FROM OLD.adlist;
-INSERT OR REPLACE INTO adlist_by_group SELECT * FROM OLD.adlist_by_group;
-
-INSERT OR REPLACE INTO info SELECT * FROM OLD.info;
-
-INSERT OR REPLACE INTO client SELECT * FROM OLD.client;
-INSERT OR REPLACE INTO client_by_group SELECT * FROM OLD.client_by_group;
-
-CREATE TRIGGER tr_domainlist_add AFTER INSERT ON domainlist
-    BEGIN
-      INSERT INTO domainlist_by_group (domainlist_id, group_id) VALUES (NEW.id, 0);
-    END;
-
-CREATE TRIGGER tr_client_add AFTER INSERT ON client
-    BEGIN
-      INSERT INTO client_by_group (client_id, group_id) VALUES (NEW.id, 0);
-    END;
-
-CREATE TRIGGER tr_adlist_add AFTER INSERT ON adlist
-    BEGIN
-      INSERT INTO adlist_by_group (adlist_id, group_id) VALUES (NEW.id, 0);
-    END;
-
-COMMIT;

@@ -1,4 +1,4 @@
-/var/log/pihole/pihole.log {
+/var/log/pihole.log {
     # su #
     daily
     copytruncate
@@ -9,7 +9,7 @@
     nomail
 }

-/var/log/pihole/pihole-FTL.log {
+/var/log/pihole-FTL.log {
     # su #
     weekly
     copytruncate

@@ -1,8 +1,8 @@
-#!/usr/bin/env bash
+#!/bin/bash
 ### BEGIN INIT INFO
 # Provides: pihole-FTL
-# Required-Start: $remote_fs $syslog $network
-# Required-Stop: $remote_fs $syslog $network
+# Required-Start: $remote_fs $syslog
+# Required-Stop: $remote_fs $syslog
 # Default-Start: 2 3 4 5
 # Default-Stop: 0 1 6
 # Short-Description: pihole-FTL daemon
@@ -10,40 +10,37 @@
 ### END INIT INFO

 FTLUSER=pihole
-PIDFILE=/run/pihole-FTL.pid
-
-is_running() {
-    pgrep -o "pihole-FTL" > /dev/null 2>&1
-}
+BINARY="/usr/bin/pihole-FTL"
+PIDFILE=/var/run/pihole-FTL.pid

+. /lib/lsb/init-functions

 # Start the service
 start() {
-    if is_running; then
+    if pidofproc -p "${PIDFILE}" > /dev/null 2>&1; then
         echo "pihole-FTL is already running"
     else
         # Touch files to ensure they exist (create if non-existing, preserve if existing)
-        touch /var/log/pihole/pihole-FTL.log /var/log/pihole/pihole.log
+        touch /var/log/pihole-FTL.log /var/log/pihole.log
         touch /run/pihole-FTL.pid /run/pihole-FTL.port
         touch /etc/pihole/dhcp.leases
-        mkdir -p /run/pihole
+        mkdir -p /var/run/pihole
         mkdir -p /var/log/pihole
-        chown pihole:pihole /run/pihole /var/log/pihole
+        chown pihole:pihole /var/run/pihole /var/log/pihole
         # Remove possible leftovers from previous pihole-FTL processes
         rm -f /dev/shm/FTL-* 2> /dev/null
-        rm /run/pihole/FTL.sock 2> /dev/null
+        rm /var/run/pihole/FTL.sock 2> /dev/null
        # Ensure that permissions are set so that pihole-FTL can edit all necessary files
         chown pihole:pihole /run/pihole-FTL.pid /run/pihole-FTL.port
         chown pihole:pihole /etc/pihole /etc/pihole/dhcp.leases 2> /dev/null
-        chown pihole:pihole /var/log/pihole/pihole-FTL.log /var/log/pihole/pihole.log
-        chmod 0644 /var/log/pihole/pihole-FTL.log /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole/pihole.log
-        # Chown database files to the user FTL runs as. We ignore errors as the files may not (yet) exist
-        chown pihole:pihole /etc/pihole/pihole-FTL.db /etc/pihole/gravity.db 2> /dev/null
-        if setcap CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_NET_ADMIN,CAP_SYS_NICE+eip "$(which pihole-FTL)"; then
-            su -s /bin/sh -c "/usr/bin/pihole-FTL" "$FTLUSER"
+        chown pihole:pihole /var/log/pihole-FTL.log /var/log/pihole.log
+        chmod 0644 /var/log/pihole-FTL.log /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole.log
+        echo "nameserver 127.0.0.1" | /sbin/resolvconf -a lo.piholeFTL
+        if setcap CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_NET_ADMIN+eip "$(which pihole-FTL)"; then
+            start_daemon -p "${PIDFILE}" /usr/bin/su -s /bin/sh -c "${BINARY} -f" "$FTLUSER" &
         else
             echo "Warning: Starting pihole-FTL as root because setting capabilities is not supported on this system"
-            pihole-FTL
+            start_daemon -p "${PIDFILE}" "${BINARY}" -f &
         fi
         echo
     fi
@@ -51,10 +48,11 @@ start() {

 # Stop the service
 stop() {
-    if is_running; then
-        pkill -o pihole-FTL
+    if pidofproc -p "${PIDFILE}" > /dev/null 2>&1; then
+        /sbin/resolvconf -d lo.piholeFTL
+        killproc -p "${PIDFILE}" "${BINARY}"
         for i in {1..5}; do
-            if ! is_running; then
+            if ! pidofproc -p "${PIDFILE}" > /dev/null 2>&1; then
                 break
             fi

@@ -63,9 +61,9 @@ stop() {
         done
         echo

-        if is_running; then
+        if pidofproc -p "${PIDFILE}" > /dev/null 2>&1; then
             echo "Not stopped; may still be shutting down or shutdown may have failed, killing now"
-            pkill -o -9 pihole-FTL
+            killproc -p "${PIDFILE}" "${BINARY}" 9
             exit 1
         else
             echo "Stopped"
@@ -78,7 +76,7 @@ stop() {

 # Indicate the service status
 status() {
-    if is_running; then
+    if pidofproc -p "${PIDFILE}" > /dev/null 2>&1; then
         echo "[ ok ] pihole-FTL is running"
         exit 0
     else

@@ -10,7 +10,7 @@
 #
 #
 # This file is under source-control of the Pi-hole installation and update
-# scripts, any changes made to this file will be overwritten when the software
+# scripts, any changes made to this file will be overwritten when the softare
 # is updated or re-installed. Please make any changes to the appropriate crontab
 # or other cron file snippets.

@@ -18,19 +18,19 @@
 # early morning. Download any updates from the adlists
 # Squash output to log, then splat the log to stdout on error to allow for
 # standard crontab job error handling.
-59 1 * * 7 root PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updateGravity >/var/log/pihole/pihole_updateGravity.log || cat /var/log/pihole/pihole_updateGravity.log
+59 1 * * 7 root PATH="$PATH:/usr/local/bin/" pihole updateGravity >/var/log/pihole_updateGravity.log || cat /var/log/pihole_updateGravity.log

 # Pi-hole: Flush the log daily at 00:00
 # The flush script will use logrotate if available
 # parameter "once": logrotate only once (default is twice)
 # parameter "quiet": don't print messages
-00 00 * * * root PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole flush once quiet
+00 00 * * * root PATH="$PATH:/usr/local/bin/" pihole flush once quiet

 @reboot root /usr/sbin/logrotate /etc/pihole/logrotate

 # Pi-hole: Grab local version and branch every 10 minutes
-*/10 * * * * root PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker local
+*/10 * * * * root PATH="$PATH:/usr/local/bin/" pihole updatechecker local

 # Pi-hole: Grab remote version every 24 hours
-59 17 * * * root PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker remote
-@reboot root PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker remote reboot
+59 17 * * * root PATH="$PATH:/usr/local/bin/" pihole updatechecker remote
+@reboot root PATH="$PATH:/usr/local/bin/" pihole updatechecker remote reboot

@@ -7,7 +7,7 @@ _pihole() {

     case "${prev}" in
         "pihole")
-            opts="admin blacklist checkout chronometer debug disable enable flush help logging query reconfigure regex restartdns status tail uninstall updateGravity updatePihole version wildcard whitelist arpflush"
+            opts="admin blacklist checkout chronometer debug disable enable flush help logging query reconfigure regex restartdns status tail uninstall updateGravity updatePihole version wildcard whitelist"
             COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
             ;;
         "whitelist"|"blacklist"|"wildcard"|"regex")
@@ -15,7 +15,7 @@ _pihole() {
             COMPREPLY=( $(compgen -W "${opts_lists}" -- ${cur}) )
             ;;
         "admin")
-            opts_admin="celsius email fahrenheit interface kelvin password privacylevel"
+            opts_admin="celsius email fahrenheit hostrecord interface kelvin password privacylevel"
             COMPREPLY=( $(compgen -W "${opts_admin}" -- ${cur}) )
             ;;
         "checkout")

@@ -6,46 +6,45 @@
 * Please see LICENSE file for your rights under this license. */
 
 /* Text Customisation Options ======> */
-.title::before { content: "Website Blocked"; }
-.altBtn::before { content: "Why am I here?"; }
-.linkPH::before { content: "About Pi-hole"; }
-.linkEmail::before { content: "Contact Admin"; }
+.title:before { content: "Website Blocked"; }
+.altBtn:before { content: "Why am I here?"; }
+.linkPH:before { content: "About Pi-hole"; }
+.linkEmail:before { content: "Contact Admin"; }
 
-#bpOutput.add::before { content: "Info"; }
-#bpOutput.add::after { content: "The domain is being whitelisted..."; }
-#bpOutput.error::before, .unhandled::before { content: "Error"; }
-#bpOutput.unhandled::after { content: "An unhandled exception occurred. This may happen when your browser is unable to load jQuery, or when the webserver is denying access to the Pi-hole API."; }
-#bpOutput.success::before { content: "Success"; }
-#bpOutput.success::after { content: "Website has been whitelisted! You may need to flush your DNS cache"; }
+#bpOutput.add:before { content: "Info"; }
+#bpOutput.add:after { content: "The domain is being whitelisted..."; }
+#bpOutput.error:before, .unhandled:before { content: "Error"; }
+#bpOutput.unhandled:after { content: "An unhandled exception occured. This may happen when your browser is unable to load jQuery, or when the webserver is denying access to the Pi-hole API."; }
+#bpOutput.success:before { content: "Success"; }
+#bpOutput.success:after { content: "Website has been whitelisted! You may need to flush your DNS cache"; }
 
-.recentwl::before { content: "This site has been whitelisted. Please flush your DNS cache and/or restart your browser."; }
-.unknown::before { content: "This website is not found in any of Pi-hole's blacklists. The reason you have arrived here is unknown."; }
-.cname::before { content: "This site is an alias for "; } /* <a href="http://cname.com">cname.com</a> */
-.cname::after { content: ", which may be blocked by Pi-hole."; }
+.recentwl:before { content: "This site has been whitelisted. Please flush your DNS cache and/or restart your browser."; }
+.unknown:before { content: "This website is not found in any of Pi-hole's blacklists. The reason you have arrived here is unknown."; }
+.cname:before { content: "This site is an alias for "; } /* <a href="http://cname.com">cname.com</a> */
+.cname:after { content: ", which may be blocked by Pi-hole."; }
 
-.blacklist::before { content: "Manually Blacklisted"; }
-.wildcard::before { content: "Manually Blacklisted by Wildcard"; }
-.noblock::before { content: "Not found on any Blacklist"; }
+.blacklist:before { content: "Manually Blacklisted"; }
+.wildcard:before { content: "Manually Blacklisted by Wildcard"; }
+.noblock:before { content: "Not found on any Blacklist"; }
 
-#bpBlock::before { content: "Access to the following website has been denied:"; }
-#bpFlag::before { content: "This is primarily due to being flagged as:"; }
+#bpBlock:before { content: "Access to the following website has been denied:"; }
+#bpFlag:before { content: "This is primarily due to being flagged as:"; }
 
-#bpHelpTxt::before { content: "If you have an ongoing use for this website, please "; }
-#bpHelpTxt a::before, #bpHelpTxt span::before { content: "ask the administrator"; }
-#bpHelpTxt::after{ content: " of the Pi-hole on this network to have it whitelisted"; }
+#bpHelpTxt:before { content: "If you have an ongoing use for this website, please "; }
+#bpHelpTxt a:before, #bpHelpTxt span:before { content: "ask the administrator"; }
+#bpHelpTxt:after{ content: " of the Pi-hole on this network to have it whitelisted"; }
 
-#bpBack::before { content: "Back to safety"; }
-#bpInfo::before { content: "Technical Info"; }
-#bpFoundIn::before { content: "This site is found in "; }
-#bpFoundIn span::after { content: " of "; }
-#bpFoundIn::after { content: " lists:"; }
-#bpWhitelist::before { content: "Whitelist"; }
+#bpBack:before { content: "Back to safety"; }
+#bpInfo:before { content: "Technical Info"; }
+#bpFoundIn:before { content: "This site is found in "; }
+#bpFoundIn span:after { content: " of "; }
+#bpFoundIn:after { content: " lists:"; }
+#bpWhitelist:before { content: "Whitelist"; }
 
-footer span::before { content: "Page generated on "; }
+footer span:before { content: "Page generated on "; }
 
 /* Hide whitelisting form entirely */
 /* #bpWLButtons { display: none; } */
 
 /* Text Customisation Options <=============================== */
 
 /* http://necolas.github.io/normalize.css ======> */
@@ -121,20 +120,14 @@ textarea, input, button { outline: none; }
 font-family: "Source Sans Pro";
 font-style: normal;
 font-weight: 400;
-font-display: swap;
-src: local("Source Sans Pro Regular"), local("SourceSansPro-Regular"),
-url("/admin/style/vendor/SourceSansPro/source-sans-pro-v13-latin-regular.woff2") format("woff2"),
-url("/admin/style/vendor/SourceSansPro/source-sans-pro-v13-latin-regular.woff") format("woff");
+src: local("Source Sans Pro"), local("SourceSansPro-Regular"), url("/admin/style/vendor/SourceSansPro/SourceSansPro-Regular.ttf") format("truetype");
 }
 
 @font-face {
 font-family: "Source Sans Pro";
 font-style: normal;
 font-weight: 700;
-font-display: swap;
-src: local("Source Sans Pro Bold"), local("SourceSansPro-Bold"),
-url("/admin/style/vendor/SourceSansPro/source-sans-pro-v13-latin-700.woff2") format("woff2"),
-url("/admin/style/vendor/SourceSansPro/source-sans-pro-v13-latin-700.woff") format("woff");
+src: local("Source Sans Pro Bold"), local("SourceSansPro-Bold"), url("/admin/style/vendor/SourceSansPro/SourceSansPro-Bold.ttf") format("truetype");
 }
 
 body {
@@ -174,7 +167,7 @@ h1 a {
 background-color: rgba(0,0,0,0.1);
 font-family: "Helvetica Neue", Helvetica, Arial ,sans-serif;
 font-size: 2rem;
-font-weight: 400;
+font-weight: normal;
 min-width: 230px;
 text-align: center;
 }
@@ -190,11 +183,10 @@ header #bpAlt label {
 text-indent: 30px;
 }
 
-[type="checkbox"][id$="Toggle"] { display: none; }
-[type="checkbox"][id$="Toggle"]:checked ~ #bpAbout,
-[type="checkbox"][id$="Toggle"]:checked ~ #bpMoreInfo {
-display: block;
-}
+[type=checkbox][id$="Toggle"] { display: none; }
+[type=checkbox][id$="Toggle"]:checked ~ #bpAbout,
+[type=checkbox][id$="Toggle"]:checked ~ #bpMoreInfo {
+display: block; }
 
 /* Click anywhere else on screen to hide #bpAbout */
 #bpAboutToggle:checked {
@@ -211,7 +203,7 @@ header #bpAlt label {
 #bpAbout {
 background: #3c8dbc;
 border-bottom-left-radius: 5px;
-border: 1px solid #fff;
+border: 1px solid #FFF;
 border-right-width: 0;
 box-shadow: -1px 1px 1px rgba(0,0,0,0.12);
 box-sizing: border-box;
@@ -277,8 +269,8 @@ main {
 padding: 15px;
 }
 
-#bpOutput::before {
-background: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='7' height='14' viewBox='0 0 7 14'%3E%3Cpath fill='%23fff' d='M6 11a1.371 1.371 0 011 1v1a1.371 1.371 0 01-1 1H1a1.371 1.371 0 01-1-1v-1a1.371 1.371 0 011-1h1V8H1a1.371 1.371 0 01-1-1V6a1.371 1.371 0 011-1h3a1.371 1.371 0 011 1v5h1zM3.5 0A1.5 1.5 0 112 1.5 1.5 1.5 0 013.5 0z'/%3E%3C/svg%3E") no-repeat center left;
+#bpOutput:before {
+background: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='7' height='14' viewBox='0 0 7 14'%3E%3Cpath fill='%23fff' d='M6,11a1.371,1.371,0,0,1,1,1v1a1.371,1.371,0,0,1-1,1H1a1.371,1.371,0,0,1-1-1V12a1.371,1.371,0,0,1,1-1H2V8H1A1.371,1.371,0,0,1,0,7V6A1.371,1.371,0,0,1,1,5H4A1.371,1.371,0,0,1,5,6v5H6ZM3.5,0A1.5,1.5,0,1,1,2,1.5,1.5,1.5,0,0,1,3.5,0Z'/%3E%3C/svg%3E") no-repeat center left;
 display: block;
 font-size: 1.8rem;
 text-indent: 15px;
@@ -289,8 +281,8 @@ main {
 #bpOutput.error { background: #dd4b39; }
 
 .blockMsg, .flagMsg {
-font: 700 1.8rem Consolas, Courier, monospace;
-padding: 5px 10px 10px;
+font: bold 1.8rem Consolas, Courier, monospace;
+padding: 5px 10px 10px 10px;
 text-indent: 15px;
 }
 
@@ -325,7 +317,7 @@ main {
 /* Button hover dark overlay */
 .buttons *:not(input):not([disabled]):hover {
 background-image: linear-gradient(to bottom, rgba(0,0,0,0.1), rgba(0,0,0,0.1));
-color: #fff;
+color: #FFF;
 }
 
 /* Button active shadow inset */
@@ -333,32 +325,30 @@ main {
 box-shadow: inset 0 3px 5px rgba(0,0,0,0.125);
 }
 
-/* Input border color */
+/* Input border colour */
 .buttons *:not([disabled]):hover, .buttons input:focus {
 border-color: rgba(0,0,0,0.25);
 }
 
-#bpButtons * { width: 50%; color: #fff; }
+#bpButtons * { width: 50%; color: #FFF; }
 #bpBack { background-color: #00a65a; }
 #bpInfo { background-color: #3c8dbc; }
 #bpWhitelist { background-color: #dd4b39; }
 
-#blockpage .buttons [type="password"][disabled] { color: rgba(0, 0, 0, 1); }
+#blockpage .buttons [type=password][disabled] { color: rgba(0,0,0,1); }
 #blockpage .buttons [disabled] { color: rgba(0,0,0,0.55); background-color: #e3e3e3; }
-#blockpage .buttons [type="password"]:-ms-input-placeholder { color: rgba(51, 51, 51, 0.8); }
+#blockpage .buttons [type=password]:-ms-input-placeholder { color: rgba(51,51,51,0.8); }
 
-input[type="password"] { font-size: 1.5rem; }
+input[type=password] { font-size: 1.5rem; }
 
-@-webkit-keyframes slidein { from { max-height: 0; opacity: 0; } to { max-height: 300px; opacity: 1; } }
-
 @keyframes slidein { from { max-height: 0; opacity: 0; } to { max-height: 300px; opacity: 1; } }
-#bpMoreToggle:checked ~ #bpMoreInfo { display: block; margin-top: 8px; -webkit-animation: slidein 0.05s linear; animation: slidein 0.05s linear; }
+#bpMoreToggle:checked ~ #bpMoreInfo { display: block; margin-top: 8px; animation: slidein 0.05s linear; }
 #bpMoreInfo { display: none; margin-top: 10px; }
 
 #bpQueryOutput {
 font-size: 1.2rem;
 line-height: 1.65rem;
-margin: 5px 0 0;
+margin: 5px 0 0 0;
 overflow: auto;
 padding: 0 5px;
 -webkit-overflow-scrolling: touch;
@@ -383,7 +373,7 @@ footer {
 /* Responsive Content */
 @media only screen and (max-width: 500px) {
 h1 a { font-size: 1.8rem; min-width: 170px; }
-footer span::before { content: "Generated "; }
+footer span:before { content: "Generated "; }
 footer span { display: block; }
 }
 
@@ -46,7 +46,7 @@
 #resolv-file=
 
 # By default, dnsmasq will send queries to any of the upstream
-# servers it knows about and tries to favor servers to are known
+# servers it knows about and tries to favour servers to are known
 # to be up. Uncommenting this forces dnsmasq to try each query
 # with each server strictly in the order they appear in
 # /etc/resolv.conf
@@ -189,7 +189,7 @@
 # add names to the DNS for the IPv6 address of SLAAC-configured dual-stack
 # hosts. Use the DHCPv4 lease to derive the name, network segment and
 # MAC address and assume that the host will also have an
-# IPv6 address calculated using the SLAAC algorithm.
+# IPv6 address calculated using the SLAAC alogrithm.
 #dhcp-range=1234::, ra-names
 
 # Do Router Advertisements, BUT NOT DHCP for this subnet.
@@ -210,7 +210,7 @@
 #dhcp-range=1234::, ra-stateless, ra-names
 
 # Do router advertisements for all subnets where we're doing DHCPv6
-# Unless overridden by ra-stateless, ra-names, et al, the router
+# Unless overriden by ra-stateless, ra-names, et al, the router
 # advertisements will have the M and O bits set, so that the clients
 # get addresses and configuration from DHCPv6, and the A bit reset, so the
 # clients don't use SLAAC addresses.
@@ -281,7 +281,7 @@
 # Give a fixed IPv6 address and name to client with
 # DUID 00:01:00:01:16:d2:83:fc:92:d4:19:e2:d8:b2
 # Note the MAC addresses CANNOT be used to identify DHCPv6 clients.
-# Note also the they [] around the IPv6 address are obligatory.
+# Note also the they [] around the IPv6 address are obilgatory.
 #dhcp-host=id:00:01:00:01:16:d2:83:fc:92:d4:19:e2:d8:b2, fred, [1234::5]
 
 # Ignore any clients which are not specified in dhcp-host lines
@@ -404,14 +404,14 @@
 #dhcp-option=vendor:MSFT,2,1i
 
 # Send the Encapsulated-vendor-class ID needed by some configurations of
-# Etherboot to allow is to recognize the DHCP server.
+# Etherboot to allow is to recognise the DHCP server.
 #dhcp-option=vendor:Etherboot,60,"Etherboot"
 
 # Send options to PXELinux. Note that we need to send the options even
 # though they don't appear in the parameter request list, so we need
 # to use dhcp-option-force here.
 # See http://syslinux.zytor.com/pxe.php#special for details.
-# Magic number - needed before anything else is recognized
+# Magic number - needed before anything else is recognised
 #dhcp-option-force=208,f1:00:74:7e
 # Configuration file name
 #dhcp-option-force=209,configs/common

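The first dnsmasq hunk above only touches the wording of a comment block; the behaviour that comment documents ("forces dnsmasq to try each query with each server strictly in the order they appear in /etc/resolv.conf") belongs to the `strict-order` option, which sits as a commented-out line in the stock `dnsmasq.conf` just outside this hunk. A sketch of enabling it (assumption: option name taken from upstream dnsmasq, not shown in the hunk itself):

```
# Query upstream servers strictly in resolv.conf order,
# instead of favouring whichever server is known to be up.
strict-order
```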
@@ -6,8 +6,8 @@
 * This file is copyright under the latest version of the EUPL.
 * Please see LICENSE file for your rights under this license. */
 
-// Sanitize SERVER_NAME output
-$serverName = htmlspecialchars($_SERVER["SERVER_NAME"]);
+// Sanitise HTTP_HOST output
+$serverName = htmlspecialchars($_SERVER["HTTP_HOST"]);
 // Remove external ipv6 brackets if any
 $serverName = preg_replace('/^\[(.*)\]$/', '${1}', $serverName);
 
@@ -41,7 +41,7 @@ $validExtTypes = array("asp", "htm", "html", "php", "rss", "xml", "");
 $currentUrlExt = pathinfo($_SERVER["REQUEST_URI"], PATHINFO_EXTENSION);
 
 // Set mobile friendly viewport
-$viewPort = '<meta name="viewport" content="width=device-width, initial-scale=1">';
+$viewPort = '<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"/>';
 
 // Set response header
 function setHeader($type = "x") {
@@ -50,29 +50,16 @@ function setHeader($type = "x") {
 }
 
 // Determine block page type
-if ($serverName === "pi.hole"
-|| (!empty($_SERVER["VIRTUAL_HOST"]) && $serverName === $_SERVER["VIRTUAL_HOST"])) {
+if ($serverName === "pi.hole") {
 // Redirect to Web Interface
 exit(header("Location: /admin"));
 } elseif (filter_var($serverName, FILTER_VALIDATE_IP) || in_array($serverName, $authorizedHosts)) {
 // Set Splash Page output
 $splashPage = "
-<!doctype html>
-<html lang='en'>
-<head>
-<meta charset='utf-8'>
+<html><head>
 $viewPort
-<title>● $serverName</title>
-<link rel='stylesheet' href='pihole/blockingpage.css'>
-<link rel='shortcut icon' href='admin/img/favicons/favicon.ico' type='image/x-icon'>
-</head>
-<body id='splashpage'>
-<img src='admin/img/logo.svg' alt='Pi-hole logo' width='256' height='377'>
-<br>
-<p>Pi-<strong>hole</strong>: Your black hole for Internet advertisements</p>
-<a href='/admin'>Did you mean to go to the admin panel?</a>
-</body>
-</html>
+<link rel='stylesheet' href='/pihole/blockingpage.css' type='text/css'/>
+</head><body id='splashpage'><img src='/admin/img/logo.svg'/><br/>Pi-<b>hole</b>: Your black hole for Internet advertisements<br><a href='/admin'>Did you mean to go to the admin panel?</a></body></html>
 ";
 
 // Set splash/landing page based off presence of $landPage
@@ -81,42 +68,25 @@ if ($serverName === "pi.hole"
 // Unset variables so as to not be included in $landPage
 unset($serverName, $svPasswd, $svEmail, $authorizedHosts, $validExtTypes, $currentUrlExt, $viewPort);
 
-// Render splash/landing page when directly browsing via IP or authorized hostname
+// Render splash/landing page when directly browsing via IP or authorised hostname
 exit($renderPage);
 } elseif ($currentUrlExt === "js") {
-// Serve Pi-hole JavaScript for blocked domains requesting JS
+// Serve Pi-hole Javascript for blocked domains requesting JS
 exit(setHeader("js").'var x = "Pi-hole: A black hole for Internet advertisements."');
 } elseif (strpos($_SERVER["REQUEST_URI"], "?") !== FALSE && isset($_SERVER["HTTP_REFERER"])) {
 // Serve blank image upon receiving REQUEST_URI w/ query string & HTTP_REFERRER
 // e.g: An iframe of a blocked domain
-exit(setHeader().'<!doctype html>
-<html lang="en">
-<head>
-<meta charset="utf-8"><script>window.close();</script>
-</head>
-<body>
-<img src="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=">
-</body>
+exit(setHeader().'<html>
+<head><script>window.close();</script></head>
+<body><img src="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs="></body>
 </html>');
 } elseif (!in_array($currentUrlExt, $validExtTypes) || substr_count($_SERVER["REQUEST_URI"], "?")) {
 // Serve SVG upon receiving non $validExtTypes URL extension or query string
 // e.g: Not an iframe of a blocked domain, such as when browsing to a file/query directly
 // QoL addition: Allow the SVG to be clicked on in order to quickly show the full Block Page
-$blockImg = '<a href="/">
-<svg xmlns="http://www.w3.org/2000/svg" width="110" height="16">
-<circle cx="8" cy="8" r="7" fill="none" stroke="rgba(152,2,2,.5)" stroke-width="2"/>
-<path fill="rgba(152,2,2,.5)" d="M11.526 3.04l1.414 1.415-8.485 8.485-1.414-1.414z"/>
-<text x="19.3" y="12" opacity=".3" style="font:11px Arial">
-Blocked by Pi-hole
-</text>
-</svg>
-</a>';
-exit(setHeader()."<!doctype html>
-<html lang='en'>
-<head>
-<meta charset='utf-8'>
-$viewPort
-</head>
+$blockImg = '<a href="/"><svg xmlns="http://www.w3.org/2000/svg" width="110" height="16"><defs><style>a {text-decoration: none;} circle {stroke: rgba(152,2,2,0.5); fill: none; stroke-width: 2;} rect {fill: rgba(152,2,2,0.5);} text {opacity: 0.3; font: 11px Arial;}</style></defs><circle cx="8" cy="8" r="7"/><rect x="10.3" y="-6" width="2" height="12" transform="rotate(45)"/><text x="19.3" y="12">Blocked by Pi-hole</text></svg></a>';
+exit(setHeader()."<html>
+<head>$viewPort</head>
 <body>$blockImg</body>
 </html>");
 }
@@ -126,30 +96,26 @@ if ($serverName === "pi.hole"
 // Define admin email address text based off $svEmail presence
 $bpAskAdmin = !empty($svEmail) ? '<a href="mailto:'.$svEmail.'?subject=Site Blocked: '.$serverName.'"></a>' : "<span/>";
 
-// Get possible non-standard location of FTL's database
-$FTLsettings = parse_ini_file("/etc/pihole/pihole-FTL.conf");
-if (isset($FTLsettings["GRAVITYDB"])) {
-$gravityDBFile = $FTLsettings["GRAVITYDB"];
+// Determine if at least one block list has been generated
+$blocklistglob = glob("/etc/pihole/list.0.*.domains");
+if ($blocklistglob === array()) {
+die("[ERROR] There are no domain lists generated lists within <code>/etc/pihole/</code>! Please update gravity by running <code>pihole -g</code>, or repair Pi-hole using <code>pihole -r</code>.");
+}
 
+// Set location of adlists file
+if (is_file("/etc/pihole/adlists.list")) {
+$adLists = "/etc/pihole/adlists.list";
+} elseif (is_file("/etc/pihole/adlists.default")) {
+$adLists = "/etc/pihole/adlists.default";
 } else {
-$gravityDBFile = "/etc/pihole/gravity.db";
+die("[ERROR] File not found: <code>/etc/pihole/adlists.list</code>");
 }
 
-// Connect to gravity.db
-try {
-$db = new SQLite3($gravityDBFile, SQLITE3_OPEN_READONLY);
-} catch (Exception $exception) {
-die("[ERROR]: Failed to connect to gravity.db");
-}
-
-// Get all adlist addresses
-$adlistResults = $db->query("SELECT address FROM vw_adlist");
-$adlistsUrls = array();
-while ($row = $adlistResults->fetchArray()) {
-array_push($adlistsUrls, $row[0]);
-}
+// Get all URLs starting with "http" or "www" from adlists and re-index array numerically
+$adlistsUrls = array_values(preg_grep("/(^http)|(^www)/i", file($adLists, FILE_IGNORE_NEW_LINES)));
 
 if (empty($adlistsUrls))
-die("[ERROR]: There are no adlists enabled");
+die("[ERROR]: There are no adlist URL's found within <code>$adLists</code>");
 
 // Get total number of blocklists (Including Whitelist, Blacklist & Wildcard lists)
 $adlistsCount = count($adlistsUrls) + 3;
@@ -161,12 +127,7 @@ ini_set("default_socket_timeout", 3);
 function queryAds($serverName) {
 // Determine the time it takes while querying adlists
 $preQueryTime = microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"];
-$queryAdsURL = sprintf(
-"http://127.0.0.1:%s/admin/scripts/pi-hole/php/queryads.php?domain=%s&bp",
-$_SERVER["SERVER_PORT"],
-$serverName
-);
-$queryAds = file($queryAdsURL, FILE_IGNORE_NEW_LINES);
+$queryAds = file("http://127.0.0.1/admin/scripts/pi-hole/php/queryads.php?domain=$serverName&bp", FILE_IGNORE_NEW_LINES);
 $queryAds = array_values(array_filter(preg_replace("/data:\s+/", "", $queryAds)));
 $queryTime = sprintf("%.0f", (microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"]) - $preQueryTime);
 
@@ -244,12 +205,12 @@ $phVersion = exec("cd /etc/.pihole/ && git describe --long --tags");
 if (explode("-", $phVersion)[1] != "0")
 $execTime = microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"];
 
-// Please Note: Text is added via CSS to allow an admin to provide a localized
+// Please Note: Text is added via CSS to allow an admin to provide a localised
 // language without the need to edit this file
 
 setHeader();
 ?>
-<!doctype html>
+<!DOCTYPE html>
 <!-- Pi-hole: A black hole for Internet advertisements
 * (c) 2017 Pi-hole, LLC (https://pi-hole.net)
 * Network-wide ad blocking via your own hardware.
@@ -257,14 +218,14 @@ setHeader();
 * This file is copyright under the latest version of the EUPL. -->
 <html>
 <head>
-<meta charset="utf-8">
+<meta charset="UTF-8">
 <?=$viewPort ?>
-<meta name="robots" content="noindex,nofollow">
+<meta name="robots" content="noindex,nofollow"/>
 <meta http-equiv="x-dns-prefetch-control" content="off">
-<link rel="stylesheet" href="pihole/blockingpage.css">
-<link rel="shortcut icon" href="admin/img/favicons/favicon.ico" type="image/x-icon">
+<link rel="shortcut icon" href="//pi.hole/admin/img/favicon.png" type="image/x-icon"/>
+<link rel="stylesheet" href="//pi.hole/pihole/blockingpage.css" type="text/css"/>
 <title>● <?=$serverName ?></title>
-<script src="admin/scripts/vendor/jquery.min.js"></script>
+<script src="//pi.hole/admin/scripts/vendor/jquery.min.js"></script>
 <script>
 window.onload = function () {
 <?php
@@ -296,16 +257,16 @@ setHeader();
 </h1>
 <div class="spc"></div>
 
-<input id="bpAboutToggle" type="checkbox">
+<input id="bpAboutToggle" type="checkbox"/>
 <div id="bpAbout">
 <div class="aboutPH">
-<div class="aboutImg"></div>
+<div class="aboutImg"/></div>
 <p>Open Source Ad Blocker
 <small>Designed for Raspberry Pi</small>
 </p>
 </div>
 <div class="aboutLink">
-<a class="linkPH" href="https://docs.pi-hole.net/"><?php //About PH ?></a>
+<a class="linkPH" href="https://github.com/pi-hole/pi-hole/wiki/What-is-Pi-hole%3F-A-simple-explanation"><?php //About PH ?></a>
 <?php if (!empty($svEmail)) echo '<a class="linkEmail" href="mailto:'.$svEmail.'"></a>'; ?>
 </div>
 </div>
@@ -336,9 +297,8 @@ setHeader();
 <pre id='bpQueryOutput'><?php if ($featuredTotal > 0) foreach ($queryResults as $num => $value) { echo "<span>[$num]:</span>$adlistsUrls[$num]\n"; } ?></pre>
 
 <form id="bpWLButtons" class="buttons">
-<input id="bpWLDomain" type="text" value="<?=$serverName ?>" disabled>
-<input id="bpWLPassword" type="password" placeholder="JavaScript disabled" disabled>
-<button id="bpWhitelist" type="button" disabled></button>
+<input id="bpWLDomain" type="text" value="<?=$serverName ?>" disabled/>
+<input id="bpWLPassword" type="password" placeholder="Javascript disabled" disabled/><button id="bpWhitelist" type="button" disabled></button>
 </form>
 </div>
 </main>

@@ -27,10 +27,10 @@ server.modules = (
 )

 server.document-root = "/var/www/html"
-server.error-handler-404 = "/pihole/index.php"
+server.error-handler-404 = "pihole/index.php"
 server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
 server.errorlog = "/var/log/lighttpd/error.log"
-server.pid-file = "/run/lighttpd.pid"
+server.pid-file = "/var/run/lighttpd.pid"
 server.username = "www-data"
 server.groupname = "www-data"
 server.port = 80
@@ -42,44 +42,17 @@ url.access-deny = ( "~", ".inc", ".md", ".yml", ".ini" )
 static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

 compress.cache-dir = "/var/cache/lighttpd/compress/"
-compress.filetype = (
-    "application/json",
-    "application/vnd.ms-fontobject",
-    "application/xml",
-    "font/eot",
-    "font/opentype",
-    "font/otf",
-    "font/ttf",
-    "image/bmp",
-    "image/svg+xml",
-    "image/vnd.microsoft.icon",
-    "image/x-icon",
-    "text/css",
-    "text/html",
-    "text/javascript",
-    "text/plain",
-    "text/xml"
-)
+compress.filetype = ( "application/javascript", "text/css", "text/html", "text/plain" )

-mimetype.assign = (
-    ".ico" => "image/x-icon",
-    ".jpeg" => "image/jpeg",
-    ".jpg" => "image/jpeg",
-    ".png" => "image/png",
-    ".svg" => "image/svg+xml",
-    ".css" => "text/css; charset=utf-8",
-    ".html" => "text/html; charset=utf-8",
-    ".js" => "text/javascript; charset=utf-8",
-    ".json" => "application/json; charset=utf-8",
-    ".map" => "application/json; charset=utf-8",
-    ".txt" => "text/plain; charset=utf-8",
-    ".eot" => "application/vnd.ms-fontobject",
-    ".otf" => "font/otf",
-    ".ttc" => "font/collection",
-    ".ttf" => "font/ttf",
-    ".woff" => "font/woff",
-    ".woff2" => "font/woff2"
-)
+mimetype.assign = ( ".png" => "image/png",
+                    ".jpg" => "image/jpeg",
+                    ".jpeg" => "image/jpeg",
+                    ".html" => "text/html",
+                    ".css" => "text/css; charset=utf-8",
+                    ".js" => "application/javascript",
+                    ".json" => "application/json",
+                    ".txt" => "text/plain",
+                    ".svg" => "image/svg+xml" )

 # default listening port for IPv6 falls back to the IPv4 port
 include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
@@ -96,7 +69,7 @@ $HTTP["url"] =~ "^/admin/" {
     "X-Frame-Options" => "DENY"
 )

-$HTTP["url"] =~ "\.(eot|otf|tt[cf]|woff2?)$" {
+$HTTP["url"] =~ ".ttf$" {
     # Allow Block Page access to local fonts
     setenv.add-response-header = ( "Access-Control-Allow-Origin" => "*" )
 }
@@ -107,9 +80,6 @@ $HTTP["url"] =~ "^/admin/\.(.*)" {
     url.access-deny = ("")
 }

-# Default expire header
-expire.url = ( "" => "access plus 0 seconds" )
-
 # Add user chosen options held in external file
 # This uses include_shell instead of an include wildcard for compatibility
 include_shell "cat external.conf 2>/dev/null"
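The hunk above replaces the broad font-matching regex `"\.(eot|otf|tt[cf]|woff2?)$"` (which covers `.eot`, `.otf`, `.ttc`, `.ttf`, `.woff`, and `.woff2`) with the much narrower `".ttf$"`, whose unescaped dot also matches any character. A hedged sketch of the difference, using `grep -E` only to illustrate the same POSIX ERE semantics that lighttpd's `=~` operator applies:

```shell
#!/usr/bin/env bash
# The two URL patterns from the diff, checked as POSIX extended regexes.
new_re='\.(eot|otf|tt[cf]|woff2?)$'
old_re='.ttf$'

matches() { printf '%s\n' "$2" | grep -Eq "$1"; }

matches "$new_re" '/admin/fonts/source.woff2' && echo "broad pattern: woff2 matched"
matches "$old_re" '/admin/fonts/source.woff2' || echo "narrow pattern: woff2 missed"
# The unescaped dot in ".ttf$" matches ANY character, not just a literal dot:
matches "$old_re" '/admin/fonts/xttf' && echo "narrow pattern: 'xttf' matched too"
```

So with the narrow pattern, only TTF requests (and accidental look-alikes) get the CORS header; WOFF/WOFF2/EOT fonts are left without it.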
@@ -2,7 +2,7 @@
 # (c) 2017 Pi-hole, LLC (https://pi-hole.net)
 # Network-wide ad blocking via your own hardware.
 #
-# Lighttpd config for Pi-hole
+# lighttpd config for Pi-hole
 #
 # This file is copyright under the latest version of the EUPL.
 # Please see LICENSE file for your rights under this license.
@@ -18,9 +18,9 @@
 server.modules = (
     "mod_access",
     "mod_auth",
-    "mod_expire",
     "mod_fastcgi",
     "mod_accesslog",
+    "mod_expire",
     "mod_compress",
     "mod_redirect",
     "mod_setenv",
@@ -28,68 +28,42 @@ server.modules = (
 )

 server.document-root = "/var/www/html"
-server.error-handler-404 = "/pihole/index.php"
+server.error-handler-404 = "pihole/index.php"
 server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
 server.errorlog = "/var/log/lighttpd/error.log"
-server.pid-file = "/run/lighttpd.pid"
+server.pid-file = "/var/run/lighttpd.pid"
 server.username = "lighttpd"
 server.groupname = "lighttpd"
 server.port = 80
 accesslog.filename = "/var/log/lighttpd/access.log"
 accesslog.format = "%{%s}t|%V|%r|%s|%b"


 index-file.names = ( "index.php", "index.html", "index.lighttpd.html" )
 url.access-deny = ( "~", ".inc", ".md", ".yml", ".ini" )
 static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

 compress.cache-dir = "/var/cache/lighttpd/compress/"
-compress.filetype = (
-    "application/json",
-    "application/vnd.ms-fontobject",
-    "application/xml",
-    "font/eot",
-    "font/opentype",
-    "font/otf",
-    "font/ttf",
-    "image/bmp",
-    "image/svg+xml",
-    "image/vnd.microsoft.icon",
-    "image/x-icon",
-    "text/css",
-    "text/html",
-    "text/javascript",
-    "text/plain",
-    "text/xml"
-)
+compress.filetype = ( "application/javascript", "text/css", "text/html", "text/plain" )

-mimetype.assign = (
-    ".ico" => "image/x-icon",
-    ".jpeg" => "image/jpeg",
-    ".jpg" => "image/jpeg",
-    ".png" => "image/png",
-    ".svg" => "image/svg+xml",
-    ".css" => "text/css; charset=utf-8",
-    ".html" => "text/html; charset=utf-8",
-    ".js" => "text/javascript; charset=utf-8",
-    ".json" => "application/json; charset=utf-8",
-    ".map" => "application/json; charset=utf-8",
-    ".txt" => "text/plain; charset=utf-8",
-    ".eot" => "application/vnd.ms-fontobject",
-    ".otf" => "font/otf",
-    ".ttc" => "font/collection",
-    ".ttf" => "font/ttf",
-    ".woff" => "font/woff",
-    ".woff2" => "font/woff2"
-)
+mimetype.assign = ( ".png" => "image/png",
+                    ".jpg" => "image/jpeg",
+                    ".jpeg" => "image/jpeg",
+                    ".html" => "text/html",
+                    ".css" => "text/css; charset=utf-8",
+                    ".js" => "application/javascript",
+                    ".json" => "application/json",
+                    ".txt" => "text/plain",
+                    ".svg" => "image/svg+xml" )

 # default listening port for IPv6 falls back to the IPv4 port
 #include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
 #include_shell "/usr/share/lighttpd/create-mime.assign.pl"
 #include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

-fastcgi.server = (
-    ".php" => (
-        "localhost" => (
+fastcgi.server = ( ".php" =>
+                   ( "localhost" =>
+                     (
     "socket" => "/tmp/php-fastcgi.socket",
     "bin-path" => "/usr/bin/php-cgi"
 )
@@ -104,7 +78,7 @@ $HTTP["url"] =~ "^/admin/" {
     "X-Frame-Options" => "DENY"
 )

-$HTTP["url"] =~ "\.(eot|otf|tt[cf]|woff2?)$" {
+$HTTP["url"] =~ ".ttf$" {
     # Allow Block Page access to local fonts
     setenv.add-response-header = ( "Access-Control-Allow-Origin" => "*" )
 }
@@ -115,9 +89,6 @@ $HTTP["url"] =~ "^/admin/\.(.*)" {
     url.access-deny = ("")
 }

-# Default expire header
-expire.url = ( "" => "access plus 0 seconds" )
-
 # Add user chosen options held in external file
 # This uses include_shell instead of an include wildcard for compatibility
 include_shell "cat external.conf 2>/dev/null"
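The `compress.filetype` and `mimetype.assign` hunks above interact: with mod_compress, lighttpd compresses a response only when the MIME type assigned to the file's extension appears in `compress.filetype`, so trimming either list silently disables compression for the affected types. A minimal illustrative sketch (values are examples, not the Pi-hole file):

```
# Illustrative mod_compress sketch: a ".js" file is compressed only because
# its assigned type, "application/javascript", is listed in compress.filetype.
server.modules       += ( "mod_compress" )
compress.cache-dir    = "/var/cache/lighttpd/compress/"
compress.filetype     = ( "application/javascript", "text/css", "text/html" )
mimetype.assign       = ( ".js"  => "application/javascript",
                          ".css" => "text/css" )
```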
(File diff suppressed because it is too large)
@@ -14,8 +14,8 @@ while true; do
     read -rp " ${QST} Are you sure you would like to remove ${COL_WHITE}Pi-hole${COL_NC}? [y/N] " yn
     case ${yn} in
         [Yy]* ) break;;
-        [Nn]* ) echo -e "${OVER} ${COL_LIGHT_GREEN}Uninstall has been canceled${COL_NC}"; exit 0;;
-        * ) echo -e "${OVER} ${COL_LIGHT_GREEN}Uninstall has been canceled${COL_NC}"; exit 0;;
+        [Nn]* ) echo -e "${OVER} ${COL_LIGHT_GREEN}Uninstall has been cancelled${COL_NC}"; exit 0;;
+        * ) echo -e "${OVER} ${COL_LIGHT_GREEN}Uninstall has been cancelled${COL_NC}"; exit 0;;
     esac
 done

@@ -52,16 +52,16 @@ if [[ "${INSTALL_WEB_SERVER}" == true ]]; then
     DEPS+=("${PIHOLE_WEB_DEPS[@]}")
 fi

-# Compatibility
+# Compatability
 if [ -x "$(command -v apt-get)" ]; then
     # Debian Family
-    PKG_REMOVE=("${PKG_MANAGER}" -y remove --purge)
+    PKG_REMOVE="${PKG_MANAGER} -y remove --purge"
     package_check() {
         dpkg-query -W -f='${Status}' "$1" 2>/dev/null | grep -c "ok installed"
     }
 elif [ -x "$(command -v rpm)" ]; then
     # Fedora Family
-    PKG_REMOVE=("${PKG_MANAGER}" remove -y)
+    PKG_REMOVE="${PKG_MANAGER} remove -y"
     package_check() {
         rpm -qa | grep "^$1-" > /dev/null
     }
@@ -80,7 +80,7 @@ removeAndPurge() {
     case ${yn} in
         [Yy]* )
             echo -ne " ${INFO} Removing ${i}...";
-            ${SUDO} "${PKG_REMOVE[@]}" "${i}" &> /dev/null;
+            ${SUDO} "${PKG_REMOVE} ${i}" &> /dev/null;
             echo -e "${OVER} ${INFO} Removed ${i}";
             break;;
         [Nn]* ) echo -e " ${INFO} Skipped ${i}"; break;;
@@ -132,15 +132,12 @@ removeNoPurge() {
     fi

     if package_check lighttpd > /dev/null; then
-        if [[ -f /etc/lighttpd/lighttpd.conf.orig ]]; then
+        ${SUDO} rm -rf /etc/lighttpd/ &> /dev/null
+        echo -e " ${TICK} Removed lighttpd"
+    else
+        if [ -f /etc/lighttpd/lighttpd.conf.orig ]; then
             ${SUDO} mv /etc/lighttpd/lighttpd.conf.orig /etc/lighttpd/lighttpd.conf
         fi

-        if [[ -f /etc/lighttpd/external.conf ]]; then
-            ${SUDO} rm /etc/lighttpd/external.conf
-        fi
-
-        echo -e " ${TICK} Removed lighttpd configs"
     fi

     ${SUDO} rm -f /etc/dnsmasq.d/adList.conf &> /dev/null
@@ -156,7 +153,7 @@ removeNoPurge() {

     # Restore Resolved
     if [[ -e /etc/systemd/resolved.conf.orig ]]; then
-        ${SUDO} cp -p /etc/systemd/resolved.conf.orig /etc/systemd/resolved.conf
+        ${SUDO} cp /etc/systemd/resolved.conf.orig /etc/systemd/resolved.conf
         systemctl reload-or-restart systemd-resolved
     fi

@@ -188,17 +185,9 @@ removeNoPurge() {
             echo -e " ${CROSS} Unable to remove 'pihole' user"
         fi
     fi
-    # If the pihole group exists, then remove
-    if getent group "pihole" &> /dev/null; then
-        if ${SUDO} groupdel pihole 2> /dev/null; then
-            echo -e " ${TICK} Removed 'pihole' group"
-        else
-            echo -e " ${CROSS} Unable to remove 'pihole' group"
-        fi
-    fi

     echo -e "\\n We're sorry to see you go, but thanks for checking out Pi-hole!
-If you need help, reach out to us on GitHub, Discourse, Reddit or Twitter
+If you need help, reach out to us on Github, Discourse, Reddit or Twitter
 Reinstall at any time: ${COL_WHITE}curl -sSL https://install.pi-hole.net | bash${COL_NC}

 ${COL_LIGHT_RED}Please reset the DNS on your router/clients to restore internet connectivity
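One change in the uninstaller above replaces the array form `PKG_REMOVE=("${PKG_MANAGER}" -y remove --purge)` with a plain string, and its call site quotes the whole expansion as `${SUDO} "${PKG_REMOVE} ${i}"`. Quoting a string that holds a command turns it into a single word, so the shell looks for a command whose name contains spaces; the array form `"${PKG_REMOVE[@]}"` expands to one word per element. A runnable bash sketch, using `echo` as a stand-in package manager:

```shell
#!/usr/bin/env bash
# Stand-in "package manager" so this sketch is self-contained.
PKG_MANAGER="echo"

# Array form: each element becomes its own word on expansion.
PKG_REMOVE=("${PKG_MANAGER}" -y remove --purge)
"${PKG_REMOVE[@]}" some-package    # runs: echo -y remove --purge some-package

# String form, quoted as one word: the shell searches for a command literally
# named "echo -y remove --purge some-package" and fails (status 127).
PKG_REMOVE_STR="${PKG_MANAGER} -y remove --purge"
"${PKG_REMOVE_STR} some-package" 2>/dev/null || echo "quoted string: command not found"
```

This is why the array version is the ShellCheck-clean pattern for storing commands with arguments.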
block hulu ads/lighttpd.conf (new file, 43 lines)
@@ -0,0 +1,43 @@
+# Pi-hole: A black hole for Internet advertisements
+# (c) 2015, 2016 by Jacob Salmela
+# Network-wide ad blocking via your Raspberry Pi
+# http://pi-hole.net
+# Lighttpd config file for Pi-hole
+#
+# Pi-hole is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 2 of the License, or
+# (at your option) any later version.
+
+server.modules = (
+    "mod_access",
+    "mod_alias",
+    "mod_compress",
+    "mod_redirect",
+    "mod_rewrite"
+)
+
+server.document-root = "/var/www"
+server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
+server.errorlog = "/var/log/lighttpd/error.log"
+server.pid-file = "/var/run/lighttpd.pid"
+server.username = "www-data"
+server.groupname = "www-data"
+server.port = 80
+
+
+index-file.names = ( "index.php", "index.html", "index.lighttpd.html" )
+url.access-deny = ( "~", ".inc" )
+static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
+
+compress.cache-dir = "/var/cache/lighttpd/compress/"
+compress.filetype = ( "application/javascript", "text/css", "text/html", "text/plain" )
+
+# default listening port for IPv6 falls back to the IPv4 port
+include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
+include_shell "/usr/share/lighttpd/create-mime.assign.pl"
+include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
+
+$HTTP["host"] =~ "ads.hulu.com|ads-v-darwin.hulu.com|ads-e-darwin.hulu.com" {
+    url.redirect = ( ".*" => "http://192.168.1.101:8200/MediaItems/19.mov")
+}

block hulu ads/minidlna.conf (new file, 17 lines)
@@ -0,0 +1,17 @@
+# Pi-hole: A black hole for Internet advertisements
+# (c) 2015, 2016 by Jacob Salmela
+# Network-wide ad blocking via your Raspberry Pi
+# http://pi-hole.net
+# MiniDLNA config file for Pi-hole
+#
+# Pi-hole is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 2 of the License, or
+# (at your option) any later version.
+
+media_dir=V,/var/lib/minidlna/videos/
+port=8200
+friendly_name=pihole
+serial=12345678
+model_number=1
+inotify=yes
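In the new `block hulu ads/lighttpd.conf`, the `$HTTP["host"] =~ "ads.hulu.com|..."` condition is a regex match with unescaped dots, so each `.` matches any character: the intended ad hosts match, but so would look-alike hostnames. A small sketch of the same extended regex, checked with `grep -E`:

```shell
#!/usr/bin/env bash
# Host pattern from the new lighttpd.conf, as a POSIX extended regex.
re='ads.hulu.com|ads-v-darwin.hulu.com|ads-e-darwin.hulu.com'

printf '%s\n' 'ads.hulu.com' | grep -Eq "$re" && echo "intended host matches"
# Unescaped dots match any character, so unrelated strings can match as well:
printf '%s\n' 'adsxhuluxcom' | grep -Eq "$re" && echo "look-alike also matches"
```

Escaping the dots (`ads\.hulu\.com|...`) would pin the pattern to the literal hostnames.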
gravity.sh (683 lines)
@@ -17,31 +17,36 @@ coltable="/opt/pihole/COL_TABLE"
 source "${coltable}"
 regexconverter="/opt/pihole/wildcard_regex_converter.sh"
 source "${regexconverter}"
-# shellcheck disable=SC1091
-source "/etc/.pihole/advanced/Scripts/database_migration/gravity-db.sh"

 basename="pihole"
 PIHOLE_COMMAND="/usr/local/bin/${basename}"

 piholeDir="/etc/${basename}"

-# Legacy (pre v5.0) list file locations
+adListFile="${piholeDir}/adlists.list"
+adListDefault="${piholeDir}/adlists.default"
+
 whitelistFile="${piholeDir}/whitelist.txt"
 blacklistFile="${piholeDir}/blacklist.txt"
 regexFile="${piholeDir}/regex.list"
-adListFile="${piholeDir}/adlists.list"

+adList="${piholeDir}/gravity.list"
+blackList="${piholeDir}/black.list"
 localList="${piholeDir}/local.list"
 VPNList="/etc/openvpn/ipp.txt"

-piholeGitDir="/etc/.pihole"
-gravityDBfile="${piholeDir}/gravity.db"
-gravityTEMPfile="${piholeDir}/gravity_temp.db"
-gravityDBschema="${piholeGitDir}/advanced/Templates/gravity.db.sql"
-gravityDBcopy="${piholeGitDir}/advanced/Templates/gravity_copy.sql"
-optimize_database=false

 domainsExtension="domains"
+matterAndLight="${basename}.0.matterandlight.txt"
+parsedMatter="${basename}.1.parsedmatter.txt"
+whitelistMatter="${basename}.2.whitelistmatter.txt"
+accretionDisc="${basename}.3.accretionDisc.txt"
+preEventHorizon="list.preEventHorizon"
+
+skipDownload="false"
+
+resolver="pihole-FTL"
+
+haveSourceUrls=true

 # Source setupVars from install script
 setupVars="${piholeDir}/setupVars.conf"
|
|||||||
echo -e " ${COL_LIGHT_RED}Ignoring overrides specified within pihole.conf! ${COL_NC}"
|
echo -e " ${COL_LIGHT_RED}Ignoring overrides specified within pihole.conf! ${COL_NC}"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Generate new sqlite3 file from schema template
|
# Determine if Pi-hole blocking is disabled
|
||||||
generate_gravity_database() {
|
# If this is the case, we want to update
|
||||||
sqlite3 "${1}" < "${gravityDBschema}"
|
# gravity.list.bck and black.list.bck instead of
|
||||||
}
|
# gravity.list and black.list
|
||||||
|
detect_pihole_blocking_status() {
|
||||||
# Copy data from old to new database file and swap them
|
if [[ "${BLOCKING_ENABLED}" == false ]]; then
|
||||||
gravity_swap_databases() {
|
echo -e " ${INFO} Pi-hole blocking is disabled"
|
||||||
local str
|
adList="${adList}.bck"
|
||||||
str="Building tree"
|
blackList="${blackList}.bck"
|
||||||
echo -ne " ${INFO} ${str}..."
|
|
||||||
|
|
||||||
# The index is intentionally not UNIQUE as prro quality adlists may contain domains more than once
|
|
||||||
output=$( { sqlite3 "${gravityTEMPfile}" "CREATE INDEX idx_gravity ON gravity (domain, adlist_id);"; } 2>&1 )
|
|
||||||
status="$?"
|
|
||||||
|
|
||||||
if [[ "${status}" -ne 0 ]]; then
|
|
||||||
echo -e "\\n ${CROSS} Unable to build gravity tree in ${gravityTEMPfile}\\n ${output}"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
echo -e "${OVER} ${TICK} ${str}"
|
|
||||||
|
|
||||||
str="Swapping databases"
|
|
||||||
echo -ne " ${INFO} ${str}..."
|
|
||||||
|
|
||||||
output=$( { sqlite3 "${gravityTEMPfile}" < "${gravityDBcopy}"; } 2>&1 )
|
|
||||||
status="$?"
|
|
||||||
|
|
||||||
if [[ "${status}" -ne 0 ]]; then
|
|
||||||
echo -e "\\n ${CROSS} Unable to copy data from ${gravityDBfile} to ${gravityTEMPfile}\\n ${output}"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
echo -e "${OVER} ${TICK} ${str}"
|
|
||||||
|
|
||||||
# Swap databases and remove old database
|
|
||||||
rm "${gravityDBfile}"
|
|
||||||
mv "${gravityTEMPfile}" "${gravityDBfile}"
|
|
||||||
}
|
|
||||||
|
|
||||||
# Update timestamp when the gravity table was last updated successfully
|
|
||||||
update_gravity_timestamp() {
|
|
||||||
output=$( { printf ".timeout 30000\\nINSERT OR REPLACE INTO info (property,value) values ('updated',cast(strftime('%%s', 'now') as int));" | sqlite3 "${gravityDBfile}"; } 2>&1 )
|
|
||||||
status="$?"
|
|
||||||
|
|
||||||
if [[ "${status}" -ne 0 ]]; then
|
|
||||||
echo -e "\\n ${CROSS} Unable to update gravity timestamp in database ${gravityDBfile}\\n ${output}"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
return 0
|
|
||||||
}
|
|
||||||
|
|
||||||
# Import domains from file and store them in the specified database table
|
|
||||||
database_table_from_file() {
|
|
||||||
# Define locals
|
|
||||||
local table source backup_path backup_file tmpFile type
|
|
||||||
table="${1}"
|
|
||||||
source="${2}"
|
|
||||||
backup_path="${piholeDir}/migration_backup"
|
|
||||||
backup_file="${backup_path}/$(basename "${2}")"
|
|
||||||
tmpFile="$(mktemp -p "/tmp" --suffix=".gravity")"
|
|
||||||
|
|
||||||
local timestamp
|
|
||||||
timestamp="$(date --utc +'%s')"
|
|
||||||
|
|
||||||
local rowid
|
|
||||||
declare -i rowid
|
|
||||||
rowid=1
|
|
||||||
|
|
||||||
# Special handling for domains to be imported into the common domainlist table
|
|
||||||
if [[ "${table}" == "whitelist" ]]; then
|
|
||||||
type="0"
|
|
||||||
table="domainlist"
|
|
||||||
elif [[ "${table}" == "blacklist" ]]; then
|
|
||||||
type="1"
|
|
||||||
table="domainlist"
|
|
||||||
elif [[ "${table}" == "regex" ]]; then
|
|
||||||
type="3"
|
|
||||||
table="domainlist"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Get MAX(id) from domainlist when INSERTing into this table
|
|
||||||
if [[ "${table}" == "domainlist" ]]; then
|
|
||||||
rowid="$(sqlite3 "${gravityDBfile}" "SELECT MAX(id) FROM domainlist;")"
|
|
||||||
if [[ -z "$rowid" ]]; then
|
|
||||||
rowid=0
|
|
||||||
fi
|
|
||||||
rowid+=1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Loop over all domains in ${source} file
|
|
||||||
# Read file line by line
|
|
||||||
grep -v '^ *#' < "${source}" | while IFS= read -r domain
|
|
||||||
do
|
|
||||||
# Only add non-empty lines
|
|
||||||
if [[ -n "${domain}" ]]; then
|
|
||||||
if [[ "${table}" == "domain_audit" ]]; then
|
|
||||||
# domain_audit table format (no enable or modified fields)
|
|
||||||
echo "${rowid},\"${domain}\",${timestamp}" >> "${tmpFile}"
|
|
||||||
elif [[ "${table}" == "adlist" ]]; then
|
|
||||||
# Adlist table format
|
|
||||||
echo "${rowid},\"${domain}\",1,${timestamp},${timestamp},\"Migrated from ${source}\"" >> "${tmpFile}"
|
|
||||||
else
|
else
|
||||||
# White-, black-, and regexlist table format
|
echo -e " ${INFO} Pi-hole blocking is enabled"
|
||||||
echo "${rowid},${type},\"${domain}\",1,${timestamp},${timestamp},\"Migrated from ${source}\"" >> "${tmpFile}"
|
|
||||||
fi
|
fi
|
||||||
rowid+=1
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
# Store domains in database table specified by ${table}
|
|
||||||
# Use printf as .mode and .import need to be on separate lines
|
|
||||||
# see https://unix.stackexchange.com/a/445615/83260
|
|
||||||
output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" %s\\n" "${tmpFile}" "${table}" | sqlite3 "${gravityDBfile}"; } 2>&1 )
|
|
||||||
status="$?"
|
|
||||||
|
|
||||||
if [[ "${status}" -ne 0 ]]; then
|
|
||||||
echo -e "\\n ${CROSS} Unable to fill table ${table}${type} in database ${gravityDBfile}\\n ${output}"
|
|
||||||
gravity_Cleanup "error"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Move source file to backup directory, create directory if not existing
|
|
||||||
mkdir -p "${backup_path}"
|
|
||||||
mv "${source}" "${backup_file}" 2> /dev/null || \
|
|
||||||
echo -e " ${CROSS} Unable to backup ${source} to ${backup_path}"
|
|
||||||
|
|
||||||
# Delete tmpFile
|
|
||||||
rm "${tmpFile}" > /dev/null 2>&1 || \
|
|
||||||
echo -e " ${CROSS} Unable to remove ${tmpFile}"
|
|
||||||
}
|
|
||||||
|
|
||||||
# Migrate pre-v5.0 list files to database-based Pi-hole versions
|
|
||||||
migrate_to_database() {
|
|
||||||
# Create database file only if not present
|
|
||||||
if [ ! -e "${gravityDBfile}" ]; then
|
|
||||||
# Create new database file - note that this will be created in version 1
|
|
||||||
echo -e " ${INFO} Creating new gravity database"
|
|
||||||
generate_gravity_database "${gravityDBfile}"
|
|
||||||
|
|
||||||
# Check if gravity database needs to be updated
|
|
||||||
upgrade_gravityDB "${gravityDBfile}" "${piholeDir}"
|
|
||||||
|
|
||||||
# Migrate list files to new database
|
|
||||||
if [ -e "${adListFile}" ]; then
|
|
||||||
# Store adlist domains in database
|
|
||||||
echo -e " ${INFO} Migrating content of ${adListFile} into new database"
|
|
||||||
database_table_from_file "adlist" "${adListFile}"
|
|
||||||
fi
|
|
||||||
if [ -e "${blacklistFile}" ]; then
|
|
||||||
# Store blacklisted domains in database
|
|
||||||
echo -e " ${INFO} Migrating content of ${blacklistFile} into new database"
|
|
||||||
database_table_from_file "blacklist" "${blacklistFile}"
|
|
||||||
fi
|
|
||||||
if [ -e "${whitelistFile}" ]; then
|
|
||||||
# Store whitelisted domains in database
|
|
||||||
echo -e " ${INFO} Migrating content of ${whitelistFile} into new database"
|
|
||||||
database_table_from_file "whitelist" "${whitelistFile}"
|
|
||||||
fi
|
|
||||||
if [ -e "${regexFile}" ]; then
|
|
||||||
# Store regex domains in database
|
|
||||||
# Important note: We need to add the domains to the "regex" table
|
|
||||||
# as it will only later be renamed to "regex_blacklist"!
|
|
||||||
echo -e " ${INFO} Migrating content of ${regexFile} into new database"
|
|
||||||
database_table_from_file "regex" "${regexFile}"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if gravity database needs to be updated
|
|
||||||
upgrade_gravityDB "${gravityDBfile}" "${piholeDir}"
|
|
||||||
}
|
}
|
||||||
|
|
||||||
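The removed `database_table_from_file` depends on `declare -i rowid` so that `rowid+=1` performs arithmetic; on a plain string variable, `+=` concatenates instead. A small bash sketch of the difference (variable names are illustrative):

```shell
#!/usr/bin/env bash
# Plain string variable: += appends the string "1".
rowid=1
rowid+=1
echo "$rowid"      # 11

# With the integer attribute, += evaluates arithmetically,
# matching the removed function's counter behaviour.
declare -i counter=1
counter+=1
echo "$counter"    # 2
```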
# Determine if DNS resolution is available before proceeding
|
# Determine if DNS resolution is available before proceeding
|
||||||
gravity_CheckDNSResolutionAvailable() {
|
gravity_CheckDNSResolutionAvailable() {
|
||||||
local lookupDomain="pi.hole"
|
local lookupDomain="pi.hole"
|
||||||
|
|
||||||
# Determine if $localList does not exist, and ensure it is not empty
|
# Determine if $localList does not exist
|
||||||
if [[ ! -e "${localList}" ]] || [[ -s "${localList}" ]]; then
|
if [[ ! -e "${localList}" ]]; then
|
||||||
lookupDomain="raw.githubusercontent.com"
|
lookupDomain="raw.githubusercontent.com"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Determine if $lookupDomain is resolvable
|
# Determine if $lookupDomain is resolvable
|
||||||
if timeout 4 getent hosts "${lookupDomain}" &> /dev/null; then
|
if timeout 1 getent hosts "${lookupDomain}" &> /dev/null; then
|
||||||
# Print confirmation of resolvability if it had previously failed
|
# Print confirmation of resolvability if it had previously failed
|
||||||
if [[ -n "${secs:-}" ]]; then
|
if [[ -n "${secs:-}" ]]; then
|
||||||
echo -e "${OVER} ${TICK} DNS resolution is now available\\n"
|
echo -e "${OVER} ${TICK} DNS resolution is now available\\n"
|
||||||
@@ -269,9 +119,9 @@ gravity_CheckDNSResolutionAvailable() {
|
|||||||
fi
|
fi
|
||||||
|
|
||||||
# If the /etc/resolv.conf contains resolvers other than 127.0.0.1 then the local dnsmasq will not be queried and pi.hole is NXDOMAIN.
|
# If the /etc/resolv.conf contains resolvers other than 127.0.0.1 then the local dnsmasq will not be queried and pi.hole is NXDOMAIN.
|
||||||
# This means that even though name resolution is working, the getent hosts check fails and the holddown timer keeps ticking and eventually fails
|
# This means that even though name resolution is working, the getent hosts check fails and the holddown timer keeps ticking and eventualy fails
|
||||||
# So we check the output of the last command and if it failed, attempt to use dig +short as a fallback
|
# So we check the output of the last command and if it failed, attempt to use dig +short as a fallback
|
||||||
if timeout 4 dig +short "${lookupDomain}" &> /dev/null; then
|
if timeout 1 dig +short "${lookupDomain}" &> /dev/null; then
|
||||||
if [[ -n "${secs:-}" ]]; then
|
if [[ -n "${secs:-}" ]]; then
|
||||||
echo -e "${OVER} ${TICK} DNS resolution is now available\\n"
|
echo -e "${OVER} ${TICK} DNS resolution is now available\\n"
|
||||||
fi
|
fi
|
||||||
@@ -282,7 +132,7 @@ gravity_CheckDNSResolutionAvailable() {
   fi

   # Determine error output message
-  if pgrep pihole-FTL &> /dev/null; then
+  if pidof ${resolver} &> /dev/null; then
     echo -e " ${CROSS} DNS resolution is currently unavailable"
   else
     echo -e " ${CROSS} DNS service is not running"
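The change above swaps `pgrep pihole-FTL` for `pidof ${resolver}`; both simply test whether the resolver process exists before deciding which error message to print. A tiny sketch of that probe (the process name is illustrative and nothing here requires Pi-hole):

```shell
#!/bin/sh
# Report whether a named resolver process is running.
# pgrep exits 0 when at least one matching process exists.
resolver="pihole-FTL"   # illustrative name
if pgrep "${resolver}" > /dev/null 2>&1; then
  state="running"
else
  state="not running"
fi
echo "DNS service is ${state}"
```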
@@ -303,14 +153,19 @@ gravity_CheckDNSResolutionAvailable() {
   gravity_CheckDNSResolutionAvailable
 }

-# Retrieve blocklist URLs and parse domains from adlist.list
-gravity_DownloadBlocklists() {
+# Retrieve blocklist URLs and parse domains from adlists.list
+gravity_GetBlocklistUrls() {
   echo -e " ${INFO} ${COL_BOLD}Neutrino emissions detected${COL_NC}..."

-  # Retrieve source URLs from gravity database
-  # We source only enabled adlists, sqlite3 stores boolean values as 0 (false) or 1 (true)
-  mapfile -t sources <<< "$(sqlite3 "${gravityDBfile}" "SELECT address FROM vw_adlist;" 2> /dev/null)"
-  mapfile -t sourceIDs <<< "$(sqlite3 "${gravityDBfile}" "SELECT id FROM vw_adlist;" 2> /dev/null)"
+  if [[ -f "${adListDefault}" ]] && [[ -f "${adListFile}" ]]; then
+    # Remove superceded $adListDefault file
+    rm "${adListDefault}" 2> /dev/null || \
+      echo -e " ${CROSS} Unable to remove ${adListDefault}"
+  fi

+  # Retrieve source URLs from $adListFile
+  # Logic: Remove comments and empty lines
+  mapfile -t sources <<< "$(grep -v -E "^(#|$)" "${adListFile}" 2> /dev/null)"

   # Parse source domains from $sources
   mapfile -t sourceDomains <<< "$(
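On the right-hand side, `gravity_GetBlocklistUrls` pulls URLs out of `adlists.list` by dropping comment and blank lines before loading them into an array. A self-contained sketch of that `grep`/`mapfile` step against a throwaway file (filenames and URLs are illustrative):

```shell
#!/bin/bash
# Mimic: mapfile -t sources <<< "$(grep -v -E "^(#|$)" "${adListFile}")"
adListFile="$(mktemp)"
cat > "${adListFile}" << 'EOF'
# comment line
https://example.com/hosts.txt

https://example.org/list.txt
EOF

# Drop comment lines and empty lines; one URL per array element
mapfile -t sources <<< "$(grep -v -E "^(#|$)" "${adListFile}" 2> /dev/null)"
echo "${#sources[@]} source URLs"   # → 2 source URLs
rm -f "${adListFile}"
```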
@@ -331,46 +186,23 @@ gravity_DownloadBlocklists() {
     echo -e "${OVER} ${CROSS} ${str}"
     echo -e " ${INFO} No source list found, or it is empty"
     echo ""
-    return 1
+    haveSourceUrls=false
   fi
+}

+# Define options for when retrieving blocklists
+gravity_SetDownloadOptions() {
+  local url domain agent cmd_ext str

-  local url domain agent cmd_ext str target compression
   echo ""

-  # Prepare new gravity database
-  str="Preparing new gravity database"
-  echo -ne " ${INFO} ${str}..."
-  rm "${gravityTEMPfile}" > /dev/null 2>&1
-  output=$( { sqlite3 "${gravityTEMPfile}" < "${gravityDBschema}"; } 2>&1 )
-  status="$?"
-
-  if [[ "${status}" -ne 0 ]]; then
-    echo -e "\\n ${CROSS} Unable to create new database ${gravityTEMPfile}\\n ${output}"
-    gravity_Cleanup "error"
-  else
-    echo -e "${OVER} ${TICK} ${str}"
-  fi
-
-  target="$(mktemp -p "/tmp" --suffix=".gravity")"
-
-  # Use compression to reduce the amount of data that is transfered
-  # between the Pi-hole and the ad list provider. Use this feature
-  # only if it is supported by the locally available version of curl
-  if curl -V | grep -q "Features:.* libz"; then
-    compression="--compressed"
-    echo -e " ${INFO} Using libz compression\n"
-  else
-    compression=""
-    echo -e " ${INFO} Libz compression not available\n"
-  fi
   # Loop through $sources and download each one
   for ((i = 0; i < "${#sources[@]}"; i++)); do
     url="${sources[$i]}"
     domain="${sourceDomains[$i]}"
-    id="${sourceIDs[$i]}"

     # Save the file as list.#.domain
-    saveLocation="${piholeDir}/list.${id}.${domain}.${domainsExtension}"
+    saveLocation="${piholeDir}/list.${i}.${domain}.${domainsExtension}"
     activeDomains[$i]="${saveLocation}"

     # Default user-agent (for Cloudflare's Browser Integrity Check: https://support.cloudflare.com/hc/en-us/articles/200170086-What-does-the-Browser-Integrity-Check-do-)
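The deleted (left-hand) block enables curl's `--compressed` flag only when the local build advertises libz support, so the script never passes an option the installed curl cannot honor. A hedged standalone sketch of that capability probe:

```shell
#!/bin/sh
# Ask curl for its feature list; request compressed transfers only
# when libz is available, otherwise pass no extra option at all.
if curl -V 2> /dev/null | grep -q "Features:.* libz"; then
  compression="--compressed"
else
  compression=""
fi
echo "compression option: '${compression}'"
```

The probed line is curl's own `Features:` output; on systems without curl the sketch simply falls through to the empty option.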
@@ -382,90 +214,18 @@ gravity_DownloadBlocklists() {
       *) cmd_ext="";;
     esac

-    echo -e " ${INFO} Target: ${url}"
-    local regex
-    # Check for characters NOT allowed in URLs
-    regex="[^a-zA-Z0-9:/?&%=~._()-;]"
-    if [[ "${url}" =~ ${regex} ]]; then
-      echo -e " ${CROSS} Invalid Target"
-    else
-      gravity_DownloadBlocklistFromUrl "${url}" "${cmd_ext}" "${agent}" "${sourceIDs[$i]}" "${saveLocation}" "${target}" "${compression}"
-    fi
+    if [[ "${skipDownload}" == false ]]; then
+      echo -e " ${INFO} Target: ${domain} (${url##*/})"
+      gravity_DownloadBlocklistFromUrl "${url}" "${cmd_ext}" "${agent}"
       echo ""
+    fi
   done

-  str="Storing downloaded domains in new gravity database"
-  echo -ne " ${INFO} ${str}..."
-  output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" gravity\\n" "${target}" | sqlite3 "${gravityTEMPfile}"; } 2>&1 )
-  status="$?"
-
-  if [[ "${status}" -ne 0 ]]; then
-    echo -e "\\n ${CROSS} Unable to fill gravity table in database ${gravityTEMPfile}\\n ${output}"
-    gravity_Cleanup "error"
-  else
-    echo -e "${OVER} ${TICK} ${str}"
-  fi
-
-  if [[ "${status}" -eq 0 && -n "${output}" ]]; then
-    echo -e " Encountered non-critical SQL warnings. Please check the suitability of the lists you're using!\\n\\n SQL warnings:"
-    local warning file line lineno
-    while IFS= read -r line; do
-      echo " - ${line}"
-      warning="$(grep -oh "^[^:]*:[0-9]*" <<< "${line}")"
-      file="${warning%:*}"
-      lineno="${warning#*:}"
-      if [[ -n "${file}" && -n "${lineno}" ]]; then
-        echo -n " Line contains: "
-        awk "NR==${lineno}" < "${file}"
-      fi
-    done <<< "${output}"
-    echo ""
-  fi
-
-  rm "${target}" > /dev/null 2>&1 || \
-    echo -e " ${CROSS} Unable to remove ${target}"
-
   gravity_Blackbody=true
 }

-total_num=0
-parseList() {
-  local adlistID="${1}" src="${2}" target="${3}" incorrect_lines
-  # This sed does the following things:
-  # 1. Remove all domains containing invalid characters. Valid are: a-z, A-Z, 0-9, dot (.), minus (-), underscore (_)
-  # 2. Append ,adlistID to every line
-  # 3. Ensures there is a newline on the last line
-  sed -e "/[^a-zA-Z0-9.\_-]/d;s/$/,${adlistID}/;/.$/a\\" "${src}" >> "${target}"
-  # Find (up to) five domains containing invalid characters (see above)
-  incorrect_lines="$(sed -e "/[^a-zA-Z0-9.\_-]/!d" "${src}" | head -n 5)"
-
-  local num_lines num_target_lines num_correct_lines num_invalid
-  # Get number of lines in source file
-  num_lines="$(grep -c "^" "${src}")"
-  # Get number of lines in destination file
-  num_target_lines="$(grep -c "^" "${target}")"
-  num_correct_lines="$(( num_target_lines-total_num ))"
-  total_num="$num_target_lines"
-  num_invalid="$(( num_lines-num_correct_lines ))"
-  if [[ "${num_invalid}" -eq 0 ]]; then
-    echo " ${INFO} Received ${num_lines} domains"
-  else
-    echo " ${INFO} Received ${num_lines} domains, ${num_invalid} domains invalid!"
-  fi
-
-  # Display sample of invalid lines if we found some
-  if [[ -n "${incorrect_lines}" ]]; then
-    echo " Sample of invalid domains:"
-    while IFS= read -r line; do
-      echo " - ${line}"
-    done <<< "${incorrect_lines}"
-  fi
-}
-
 # Download specified URL and perform checks on HTTP status and file content
 gravity_DownloadBlocklistFromUrl() {
-  local url="${1}" cmd_ext="${2}" agent="${3}" adlistID="${4}" saveLocation="${5}" target="${6}" compression="${7}"
-  local heisenbergCompensator="" patternBuffer str httpCode success=""
+  local url="${1}" cmd_ext="${2}" agent="${3}" heisenbergCompensator="" patternBuffer str httpCode success=""

   # Create temp file to store content on disk instead of RAM
   patternBuffer=$(mktemp -p "/tmp" --suffix=".phgpb")
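The removed `parseList` helper does its domain validation with a single `sed` program: delete any line containing a character outside `a-zA-Z0-9._-`, then append `,adlistID` so the result can later be bulk-imported as CSV. A standalone sketch of that filter on illustrative data (the ID `7` is arbitrary):

```shell
#!/bin/bash
# Same sed idea as parseList: drop invalid domains, tag survivors
# with the adlist ID for a later CSV import.
adlistID=7
src="$(mktemp)"
printf '%s\n' "good.example.com" "bad domain!" "also-ok.net" > "${src}"

# "bad domain!" contains a space and "!", so the /d command removes it
out="$(sed -e "/[^a-zA-Z0-9.\_-]/d;s/$/,${adlistID}/" "${src}")"
echo "${out}"   # good.example.com,7 and also-ok.net,7
rm -f "${src}"
```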
@@ -503,7 +263,7 @@ gravity_DownloadBlocklistFromUrl() {
   else
     printf -v port "%s" "${PIHOLE_DNS_1#*#}"
   fi
-  ip=$(dig "@${ip_addr}" -p "${port}" +short "${domain}" | tail -1)
+  ip=$(dig "@${ip_addr}" -p "${port}" +short "${domain}")
   if [[ $(echo "${url}" | awk -F '://' '{print $1}') = "https" ]]; then
     port=443;
   else port=80
@@ -513,9 +273,8 @@ gravity_DownloadBlocklistFromUrl() {
   echo -ne " ${INFO} ${str} Pending..."
   cmd_ext="--resolve $domain:$port:$ip $cmd_ext"
   fi

   # shellcheck disable=SC2086
-  httpCode=$(curl -s -L ${compression} ${cmd_ext} ${heisenbergCompensator} -w "%{http_code}" -A "${agent}" "${url}" -o "${patternBuffer}" 2> /dev/null)
+  httpCode=$(curl -s -L ${cmd_ext} ${heisenbergCompensator} -w "%{http_code}" -A "${agent}" "${url}" -o "${patternBuffer}" 2> /dev/null)

   case $url in
     # Did we "download" a local file?
@@ -547,14 +306,11 @@ gravity_DownloadBlocklistFromUrl() {
   # Determine if the blocklist was downloaded and saved correctly
   if [[ "${success}" == true ]]; then
     if [[ "${httpCode}" == "304" ]]; then
-      # Add domains to database table file
-      parseList "${adlistID}" "${saveLocation}" "${target}"
+      : # Do not attempt to re-parse file
     # Check if $patternbuffer is a non-zero length file
     elif [[ -s "${patternBuffer}" ]]; then
       # Determine if blocklist is non-standard and parse as appropriate
       gravity_ParseFileIntoDomains "${patternBuffer}" "${saveLocation}"
-      # Add domains to database table file
-      parseList "${adlistID}" "${saveLocation}" "${target}"
     else
       # Fall back to previously cached list if $patternBuffer is empty
       echo -e " ${INFO} Received empty file: ${COL_LIGHT_GREEN}using previously cached list${COL_NC}"
@@ -563,8 +319,6 @@ gravity_DownloadBlocklistFromUrl() {
     # Determine if cached list has read permission
     if [[ -r "${saveLocation}" ]]; then
       echo -e " ${CROSS} List download failed: ${COL_LIGHT_GREEN}using previously cached list${COL_NC}"
-      # Add domains to database table file
-      parseList "${adlistID}" "${saveLocation}" "${target}"
     else
       echo -e " ${CROSS} List download failed: ${COL_LIGHT_RED}no cached list available${COL_NC}"
     fi
@@ -573,29 +327,24 @@ gravity_DownloadBlocklistFromUrl() {

 # Parse source files into domains format
 gravity_ParseFileIntoDomains() {
-  local source="${1}" destination="${2}" firstLine
+  local source="${1}" destination="${2}" firstLine abpFilter

   # Determine if we are parsing a consolidated list
-  #if [[ "${source}" == "${piholeDir}/${matterAndLight}" ]]; then
+  if [[ "${source}" == "${piholeDir}/${matterAndLight}" ]]; then
     # Remove comments and print only the domain name
-    # Most of the lists downloaded are already in hosts file format but the spacing/formating is not contiguous
+    # Most of the lists downloaded are already in hosts file format but the spacing/formating is not contigious
     # This helps with that and makes it easier to read
     # It also helps with debugging so each stage of the script can be researched more in depth
-    # 1) Remove carriage returns
-    # 2) Convert all characters to lowercase
-    # 3) Remove comments (text starting with "#", include possible spaces before the hash sign)
-    # 4) Remove lines containing "/"
-    # 5) Remove leading tabs, spaces, etc.
-    # 6) Delete lines not matching domain names
-    < "${source}" tr -d '\r' | \
-    tr '[:upper:]' '[:lower:]' | \
-    sed 's/\s*#.*//g' | \
-    sed -r '/(\/).*$/d' | \
-    sed -r 's/^.*\s+//g' | \
-    sed -r '/([^\.]+\.)+[^\.]{2,}/!d' > "${destination}"
-    chmod 644 "${destination}"
+    # Awk -F splits on given IFS, we grab the right hand side (chops trailing #coments and /'s to grab the domain only.
+    # Last awk command takes non-commented lines and if they have 2 fields, take the right field (the domain) and leave
+    # the left (IP address), otherwise grab the single field.
+    < ${source} awk -F '#' '{print $1}' | \
+    awk -F '/' '{print $1}' | \
+    awk '($1 !~ /^#/) { if (NF>1) {print $2} else {print $1}}' | \
+    sed -nr -e 's/\.{2,}/./g' -e '/\./p' > ${destination}
     return 0
-  #fi
+  fi

   # Individual file parsing: Keep comments, while parsing domains from each line
   # We keep comments to respect the list maintainer's licensing
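The right-hand consolidated-list parser is a chain of awk and sed passes: chop `#` comments, chop anything after `/`, keep the domain field from hosts-format lines, squeeze repeated dots, and print only lines that still contain a dot. A compressed sketch of that pipeline on inline sample data:

```shell
#!/bin/sh
# Hosts-format lines keep field 2 (the domain); bare domains keep field 1;
# comment-only lines end up empty and are dropped by the final /\./p filter.
out="$(printf '%s\n' \
  '0.0.0.0 ads.example.com # tracker' \
  'bare-domain.example.net' \
  '# full comment line' |
  awk -F '#' '{print $1}' |
  awk -F '/' '{print $1}' |
  awk '($1 !~ /^#/) { if (NF>1) {print $2} else {print $1}}' |
  sed -n -e 's/\.\{2,\}/./g' -e '/\./p')"
echo "${out}"   # ads.example.com and bare-domain.example.net
```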
@@ -604,7 +353,46 @@ gravity_ParseFileIntoDomains() {
   # Determine how to parse individual source file formats
   if [[ "${firstLine,,}" =~ (adblock|ublock|^!) ]]; then
     # Compare $firstLine against lower case words found in Adblock lists
-    echo -e " ${CROSS} Format: Adblock (list type not supported)"
+    echo -ne " ${INFO} Format: Adblock

+    # Define symbols used as comments: [!
+    # "||.*^" includes the "Example 2" domains we can extract
+    # https://adblockplus.org/filter-cheatsheet
+    abpFilter="/^(\\[|!)|^(\\|\\|.*\\^)/"
+
+    # Parse Adblock lists by extracting "Example 2" domains
+    # Logic: Ignore lines which do not include comments or domain name anchor
+    awk ''"${abpFilter}"' {
+      # Remove valid adblock type options
+      gsub(/\$?~?(important|third-party|popup|subdocument|websocket),?/, "", $0)
+      # Remove starting domain name anchor "||" and ending seperator "^"
+      gsub(/^(\|\|)|(\^)/, "", $0)
+      # Remove invalid characters (*/,=$)
+      if($0 ~ /[*\/,=\$]/) { $0="" }
+      # Remove lines which are only IPv4 addresses
+      if($0 ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) { $0="" }
+      if($0) { print $0 }
+    }' "${source}" > "${destination}"
+
+    # Determine if there are Adblock exception rules
+    # https://adblockplus.org/filters
+    if grep -q "^@@||" "${source}" &> /dev/null; then
+      # Parse Adblock lists by extracting exception rules
+      # Logic: Ignore lines which do not include exception format "@@||example.com^"
+      awk -F "[|^]" '/^@@\|\|.*\^/ {
+        # Remove valid adblock type options
+        gsub(/\$?~?(third-party)/, "", $0)
+        # Remove invalid characters (*/,=$)
+        if($0 ~ /[*\/,=\$]/) { $0="" }
+        if($3) { print $3 }
+      }' "${source}" > "${destination}.exceptionsFile.tmp"
+
+      # Remove exceptions
+      comm -23 "${destination}" <(sort "${destination}.exceptionsFile.tmp") > "${source}"
+      mv "${source}" "${destination}"
+    fi
+
+    echo -e "${OVER} ${TICK} Format: Adblock"
   elif grep -q "^address=/" "${source}" &> /dev/null; then
     # Parse Dnsmasq format lists
     echo -e " ${CROSS} Format: Dnsmasq (list type not supported)"
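The added Adblock branch keeps only `||domain^` anchor rules, strips the anchors, removes known filter options, and discards anything still containing wildcard or option characters. A trimmed sketch of that awk logic on a few inline rules (sample rules are illustrative):

```shell
#!/bin/sh
# Extract plain domains from Adblock-style "||domain^" rules.
out="$(printf '%s\n' \
  '! comment' \
  '||ads.example.com^' \
  '||tracker.example.net^$third-party' \
  '||wild*.example.org^' |
  awk '/^(\|\|.*\^)/ {
    gsub(/\$?~?(important|third-party|popup|subdocument|websocket),?/, "", $0)
    gsub(/^(\|\|)|(\^)/, "", $0)     # strip "||" anchor and "^" separator
    if($0 ~ /[*\/,=\$]/) { $0="" }   # discard leftover wildcard/option rules
    if($0) { print $0 }
  }')"
echo "${out}"   # ads.example.com and tracker.example.net survive
```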
@@ -625,13 +413,11 @@ gravity_ParseFileIntoDomains() {
     # Print if nonempty
     length { print }
     ' "${source}" 2> /dev/null > "${destination}"
-    chmod 644 "${destination}"

     echo -e "${OVER} ${TICK} Format: URL"
   else
     # Default: Keep hosts/domains file in same format as it was downloaded
     output=$( { mv "${source}" "${destination}"; } 2>&1 )
-    chmod 644 "${destination}"

     if [[ ! -e "${destination}" ]]; then
       echo -e "\\n ${CROSS} Unable to move tmp file to ${piholeDir}
@@ -641,29 +427,103 @@ gravity_ParseFileIntoDomains() {
   fi
 }

-# Report number of entries in a table
-gravity_Table_Count() {
-  local table="${1}"
-  local str="${2}"
-  local num
-  num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${table};")"
-  if [[ "${table}" == "vw_gravity" ]]; then
-    local unique
-    unique="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(DISTINCT domain) FROM ${table};")"
-    echo -e " ${INFO} Number of ${str}: ${num} (${COL_BOLD}${unique} unique domains${COL_NC})"
-    sqlite3 "${gravityDBfile}" "INSERT OR REPLACE INTO info (property,value) VALUES ('gravity_count',${unique});"
-  else
-    echo -e " ${INFO} Number of ${str}: ${num}"
+# Create (unfiltered) "Matter and Light" consolidated list
+gravity_ConsolidateDownloadedBlocklists() {
+  local str lastLine
+
+  str="Consolidating blocklists"
+  if [[ "${haveSourceUrls}" == true ]]; then
+    echo -ne " ${INFO} ${str}..."
+  fi
+
+  # Empty $matterAndLight if it already exists, otherwise, create it
+  : > "${piholeDir}/${matterAndLight}"
+
+  # Loop through each *.domains file
+  for i in "${activeDomains[@]}"; do
+    # Determine if file has read permissions, as download might have failed
+    if [[ -r "${i}" ]]; then
+      # Remove windows CRs from file, convert list to lower case, and append into $matterAndLight
+      tr -d '\r' < "${i}" | tr '[:upper:]' '[:lower:]' >> "${piholeDir}/${matterAndLight}"
+
+      # Ensure that the first line of a new list is on a new line
+      lastLine=$(tail -1 "${piholeDir}/${matterAndLight}")
+      if [[ "${#lastLine}" -gt 0 ]]; then
+        echo "" >> "${piholeDir}/${matterAndLight}"
+      fi
+    fi
+  done
+  if [[ "${haveSourceUrls}" == true ]]; then
+    echo -e "${OVER} ${TICK} ${str}"
   fi
 }

+# Parse consolidated list into (filtered, unique) domains-only format
+gravity_SortAndFilterConsolidatedList() {
+  local str num
+
+  str="Extracting domains from blocklists"
+  if [[ "${haveSourceUrls}" == true ]]; then
+    echo -ne " ${INFO} ${str}..."
+  fi
+
+  # Parse into hosts file
+  gravity_ParseFileIntoDomains "${piholeDir}/${matterAndLight}" "${piholeDir}/${parsedMatter}"
+
+  # Format $parsedMatter line total as currency
+  num=$(printf "%'.0f" "$(wc -l < "${piholeDir}/${parsedMatter}")")
+  if [[ "${haveSourceUrls}" == true ]]; then
+    echo -e "${OVER} ${TICK} ${str}"
+  fi
+  echo -e " ${INFO} Number of domains being pulled in by gravity: ${COL_BLUE}${num}${COL_NC}"
+
+  str="Removing duplicate domains"
+  if [[ "${haveSourceUrls}" == true ]]; then
+    echo -ne " ${INFO} ${str}..."
+  fi
+
+  sort -u "${piholeDir}/${parsedMatter}" > "${piholeDir}/${preEventHorizon}"
+
+  if [[ "${haveSourceUrls}" == true ]]; then
+    echo -e "${OVER} ${TICK} ${str}"
+    # Format $preEventHorizon line total as currency
+    num=$(printf "%'.0f" "$(wc -l < "${piholeDir}/${preEventHorizon}")")
+    echo -e " ${INFO} Number of unique domains trapped in the Event Horizon: ${COL_BLUE}${num}${COL_NC}"
+  fi
+}
+
+# Whitelist user-defined domains
+gravity_Whitelist() {
+  local num str
+
+  if [[ ! -f "${whitelistFile}" ]]; then
+    echo -e " ${INFO} Nothing to whitelist!"
+    return 0
+  fi
+
+  num=$(wc -l < "${whitelistFile}")
+  str="Number of whitelisted domains: ${num}"
+  echo -ne " ${INFO} ${str}..."
+
+  # Print everything from preEventHorizon into whitelistMatter EXCEPT domains in $whitelistFile
+  comm -23 "${piholeDir}/${preEventHorizon}" <(sort "${whitelistFile}") > "${piholeDir}/${whitelistMatter}"
+
+  echo -e "${OVER} ${INFO} ${str}"
+}
+
 # Output count of blacklisted domains and regex filters
-gravity_ShowCount() {
-  gravity_Table_Count "vw_gravity" "gravity domains" ""
-  gravity_Table_Count "vw_blacklist" "exact blacklisted domains"
-  gravity_Table_Count "vw_regex_blacklist" "regex blacklist filters"
-  gravity_Table_Count "vw_whitelist" "exact whitelisted domains"
-  gravity_Table_Count "vw_regex_whitelist" "regex whitelist filters"
+gravity_ShowBlockCount() {
+  local num
+
+  if [[ -f "${blacklistFile}" ]]; then
+    num=$(printf "%'.0f" "$(wc -l < "${blacklistFile}")")
+    echo -e " ${INFO} Number of blacklisted domains: ${num}"
+  fi
+
+  if [[ -f "${regexFile}" ]]; then
+    num=$(grep -cv "^#" "${regexFile}")
+    echo -e " ${INFO} Number of regex filters: ${num}"
+  fi
 }

 # Parse list of domains into hosts format
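`gravity_Whitelist` above removes whitelisted entries by set subtraction: `comm -23` prints the lines unique to its first (sorted) input. A minimal standalone sketch (file contents are illustrative):

```shell
#!/bin/bash
# comm -23 = lines only in file1: the blocklist minus the whitelist.
# Both inputs must be sorted, hence the sort on the whitelist side.
blocklist="$(mktemp)"; whitelist="$(mktemp)"
printf '%s\n' 'a.example.com' 'b.example.com' 'c.example.com' | sort > "${blocklist}"
printf '%s\n' 'b.example.com' > "${whitelist}"

out="$(comm -23 "${blocklist}" <(sort "${whitelist}"))"
echo "${out}"   # a.example.com and c.example.com survive
rm -f "${blocklist}" "${whitelist}"
```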
@@ -683,7 +543,7 @@ gravity_ParseDomainsIntoHosts() {
 }

 # Create "localhost" entries into hosts format
-gravity_generateLocalList() {
+gravity_ParseLocalDomains() {
   local hostname

   if [[ -s "/etc/hostname" ]]; then
@@ -699,7 +559,6 @@ gravity_generateLocalList() {

   # Empty $localList if it already exists, otherwise, create it
   : > "${localList}"
-  chmod 644 "${localList}"

   gravity_ParseDomainsIntoHosts "${localList}.tmp" "${localList}"

@@ -709,6 +568,40 @@ gravity_generateLocalList() {
   fi
 }

+# Create primary blacklist entries
+gravity_ParseBlacklistDomains() {
+  local output status
+
+  # Empty $accretionDisc if it already exists, otherwise, create it
+  : > "${piholeDir}/${accretionDisc}"
+
+  if [[ -f "${piholeDir}/${whitelistMatter}" ]]; then
+    mv "${piholeDir}/${whitelistMatter}" "${piholeDir}/${accretionDisc}"
+  else
+    # There was no whitelist file, so use preEventHorizon instead of whitelistMatter.
+    cp "${piholeDir}/${preEventHorizon}" "${piholeDir}/${accretionDisc}"
+  fi
+
+  # Move the file over as /etc/pihole/gravity.list so dnsmasq can use it
+  output=$( { mv "${piholeDir}/${accretionDisc}" "${adList}"; } 2>&1 )
+  status="$?"
+
+  if [[ "${status}" -ne 0 ]]; then
+    echo -e "\\n ${CROSS} Unable to move ${accretionDisc} from ${piholeDir}\\n ${output}"
+    gravity_Cleanup "error"
+  fi
+}
+
+# Create user-added blacklist entries
+gravity_ParseUserDomains() {
+  if [[ ! -f "${blacklistFile}" ]]; then
+    return 0
+  fi
+  # Copy the file over as /etc/pihole/black.list so dnsmasq can use it
+  cp "${blacklistFile}" "${blackList}" 2> /dev/null || \
+    echo -e "\\n ${CROSS} Unable to move ${blacklistFile##*/} to ${piholeDir}"
+}
+
 # Trap Ctrl-C
 gravity_Trap() {
   trap '{ echo -e "\\n\\n ${INFO} ${COL_LIGHT_RED}User-abort detected${COL_NC}"; gravity_Cleanup "error"; }' INT
@@ -729,7 +622,7 @@ gravity_Cleanup() {
   # Ensure this function only runs when gravity_SetDownloadOptions() has completed
   if [[ "${gravity_Blackbody:-}" == true ]]; then
     # Remove any unused .domains files
-    for file in "${piholeDir}"/*."${domainsExtension}"; do
+    for file in ${piholeDir}/*.${domainsExtension}; do
       # If list is not in active array, then remove it
       if [[ ! "${activeDomains[*]}" == *"${file}"* ]]; then
         rm -f "${file}" 2> /dev/null || \
@@ -740,28 +633,13 @@ gravity_Cleanup() {

   echo -e "${OVER} ${TICK} ${str}"

-  if ${optimize_database} ; then
-    str="Optimizing domains database"
-    echo -ne " ${INFO} ${str}..."
-    # Run VACUUM command on database to optimize it
-    output=$( { sqlite3 "${gravityDBfile}" "VACUUM;"; } 2>&1 )
-    status="$?"
-
-    if [[ "${status}" -ne 0 ]]; then
-      echo -e "\\n ${CROSS} Unable to optimize gravity database ${gravityDBfile}\\n ${output}"
-      error="error"
-    else
-      echo -e "${OVER} ${TICK} ${str}"
-    fi
-  fi
-
   # Only restart DNS service if offline
-  if ! pgrep pihole-FTL &> /dev/null; then
+  if ! pidof ${resolver} &> /dev/null; then
     "${PIHOLE_COMMAND}" restartdns
     dnsWasOffline=true
   fi

-  # Print Pi-hole status if an error occurred
+  # Print Pi-hole status if an error occured
   if [[ -n "${error}" ]]; then
     "${PIHOLE_COMMAND}" status
     exit 1
@@ -781,28 +659,17 @@ Options:
 for var in "$@"; do
   case "${var}" in
     "-f" | "--force" ) forceDelete=true;;
-    "-o" | "--optimize" ) optimize_database=true;;
-    "-r" | "--recreate" ) recreate_database=true;;
     "-h" | "--help" ) helpFunc;;
+    "-sd" | "--skip-download" ) skipDownload=true;;
+    "-b" | "--blacklist-only" ) listType="blacklist";;
+    "-w" | "--whitelist-only" ) listType="whitelist";;
+    "-wild" | "--wildcard-only" ) listType="wildcard"; dnsRestartType="restart";;
   esac
 done

 # Trap Ctrl-C
 gravity_Trap

-if [[ "${recreate_database:-}" == true ]]; then
-  str="Restoring from migration backup"
-  echo -ne "${INFO} ${str}..."
-  rm "${gravityDBfile}"
-  pushd "${piholeDir}" > /dev/null || exit
-  cp migration_backup/* .
-  popd > /dev/null || exit
-  echo -e "${OVER} ${TICK} ${str}"
-fi
-
-# Move possibly existing legacy files to the gravity database
-migrate_to_database
-
 if [[ "${forceDelete:-}" == true ]]; then
   str="Deleting existing list cache"
   echo -ne "${INFO} ${str}..."
@@ -811,32 +678,56 @@ if [[ "${forceDelete:-}" == true ]]; then
   echo -e "${OVER} ${TICK} ${str}"
 fi

-# Gravity downloads blocklists next
+detect_pihole_blocking_status
+
+# Determine which functions to run
+if [[ "${skipDownload}" == false ]]; then
+  # Gravity needs to download blocklists
   gravity_CheckDNSResolutionAvailable
-gravity_DownloadBlocklists
+  gravity_GetBlocklistUrls
+  if [[ "${haveSourceUrls}" == true ]]; then
+    gravity_SetDownloadOptions
+  fi
+  gravity_ConsolidateDownloadedBlocklists
+  gravity_SortAndFilterConsolidatedList
+else
+  # Gravity needs to modify Blacklist/Whitelist/Wildcards
+  echo -e " ${INFO} Using cached Event Horizon list..."
+  numberOf=$(printf "%'.0f" "$(wc -l < "${piholeDir}/${preEventHorizon}")")
+  echo -e " ${INFO} ${COL_BLUE}${numberOf}${COL_NC} unique domains trapped in the Event Horizon"
+fi

-# Create local.list
-gravity_generateLocalList
+# Perform when downloading blocklists, or modifying the whitelist
+if [[ "${skipDownload}" == false ]] || [[ "${listType}" == "whitelist" ]]; then
+  gravity_Whitelist
+fi

-# Migrate rest of the data from old to new database
-gravity_swap_databases
+convert_wildcard_to_regex
+gravity_ShowBlockCount

-# Update gravity timestamp
-update_gravity_timestamp
+# Perform when downloading blocklists, or modifying the white/blacklist (not wildcards)
+if [[ "${skipDownload}" == false ]] || [[ "${listType}" == *"list" ]]; then
+  str="Parsing domains into hosts format"
+  echo -ne " ${INFO} ${str}..."

-# Ensure proper permissions are set for the database
-chown pihole:pihole "${gravityDBfile}"
-chmod g+w "${piholeDir}" "${gravityDBfile}"
+  gravity_ParseUserDomains

-# Compute numbers to be displayed
-gravity_ShowCount
+  # Perform when downloading blocklists
+  if [[ ! "${listType:-}" == "blacklist" ]]; then
+    gravity_ParseLocalDomains
+    gravity_ParseBlacklistDomains
+  fi
+
+  echo -e "${OVER} ${TICK} ${str}"
+
+  gravity_Cleanup
+fi
+
+echo ""

 # Determine if DNS has been restarted by this instance of gravity
 if [[ -z "${dnsWasOffline:-}" ]]; then
-  "${PIHOLE_COMMAND}" restartdns reload
+  # Use "force-reload" when restarting dnsmasq for everything but Wildcards
+  "${PIHOLE_COMMAND}" restartdns "${dnsRestartType:-force-reload}"
 fi
-
-gravity_Cleanup
-echo ""
|
|
||||||
|
|
||||||
"${PIHOLE_COMMAND}" status
|
"${PIHOLE_COMMAND}" status
|
||||||
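The `skipDownload` flag in the hunk above gates which gravity functions run. A minimal standalone sketch of that branch (function names here are illustrative stand-ins, not the real gravity.sh helpers):

```shell
#!/usr/bin/env bash
# Sketch only: how the skipDownload flag selects between the download path
# and the cached-list path. The two helper functions are hypothetical.
skipDownload="${1:-false}"

download_blocklists() { echo "downloading blocklists"; }
use_cached_list()     { echo "using cached Event Horizon list"; }

if [[ "${skipDownload}" == false ]]; then
    download_blocklists
else
    use_cached_list
fi
```

Running it with no argument takes the download path; passing `true` reuses the cache, mirroring how `pihole -g` distinguishes a full run from a list-only update.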
@@ -1,4 +1,4 @@
-.TH "Pi-hole" "8" "Pi-hole" "Pi-hole" "April 2020"
+.TH "Pi-hole" "8" "Pi-hole" "Pi-hole" "May 2018"
 .SH "NAME"
 
 Pi-hole : A black-hole for internet advertisements
@@ -11,6 +11,8 @@ Pi-hole : A black-hole for internet advertisements
 .br
 \fBpihole -a\fR (\fB-c|-f|-k\fR)
 .br
+\fBpihole -a\fR [\fB-r\fR hostrecord]
+.br
 \fBpihole -a -e\fR email
 .br
 \fBpihole -a -i\fR interface
@@ -33,7 +35,7 @@ pihole -g\fR
 .br
 \fBpihole\fR \fB-l\fR (\fBon|off|off noflush\fR)
 .br
-\fBpihole -up \fR[--check-only]
+\fBpihole -up \fR[--checkonly]
 .br
 \fBpihole -v\fR [-p|-a|-f] [-c|-l|-hash]
 .br
@@ -41,7 +43,7 @@ pihole -g\fR
 .br
 pihole status
 .br
-pihole restartdns\fR [options]
+pihole restartdns\fR
 .br
 \fBpihole\fR (\fBenable\fR|\fBdisable\fR [time])
 .br
@@ -64,24 +66,14 @@ Available commands and options:
 Adds or removes specified domain or domains to the blacklist
 .br
 
-\fB--regex, regex\fR [options] [<regex1> <regex2 ...>]
-.br
-Add or removes specified regex filter to the regex blacklist
-.br
 
-\fB--white-regex\fR [options] [<regex1> <regex2 ...>]
-.br
-Add or removes specified regex filter to the regex whitelist
-.br
 
 \fB--wild, wildcard\fR [options] [<domain1> <domain2 ...>]
 .br
 Add or removes specified domain to the wildcard blacklist
 .br
 
-\fB--white-wild\fR [options] [<domain1> <domain2 ...>]
+\fB--regex, regex\fR [options] [<regex1> <regex2 ...>]
 .br
-Add or removes specified domain to the wildcard whitelist
+Add or removes specified regex filter to the regex blacklist
 .br
 
 (Whitelist/Blacklist manipulation options):
@@ -132,6 +124,9 @@ Available commands and options:
 -f, fahrenheit Set Fahrenheit as preferred temperature unit
 .br
 -k, kelvin Set Kelvin as preferred temperature unit
+.br
+-r, hostrecord Add a name to the DNS associated to an
+IPv4/IPv6 address
 .br
 -e, email Set an administrative contact address for the
 Block Page
@@ -187,12 +182,12 @@ Available commands and options:
 
 (Logging options):
 .br
-on Enable the Pi-hole log at /var/log/pihole/pihole.log
+on Enable the Pi-hole log at /var/log/pihole.log
 .br
 off Disable and flush the Pi-hole log at
-/var/log/pihole/pihole.log
+/var/log/pihole.log
 .br
-off noflush Disable the Pi-hole log at /var/log/pihole/pihole.log
+off noflush Disable the Pi-hole log at /var/log/pihole.log
 .br
 
 \fB-up, updatePihole\fR [--check-only]
@@ -224,7 +219,7 @@ Available commands and options:
 .br
 -l, --latest Return the latest version
 .br
---hash Return the GitHub hash from your local
+--hash Return the Github hash from your local
 repositories
 .br
 
@@ -255,21 +250,14 @@ Available commands and options:
 #m Disable Pi-hole functionality for # minute(s)
 .br
 
-\fBrestartdns\fR [options]
+\fBrestartdns\fR
 .br
-Full restart Pi-hole subsystems. Without any options (see below) a full restart causes config file parsing and history re-reading
-.br
+Restart Pi-hole subsystems
 
-(restart options):
-.br
-reload Updates the lists (incl. HOSTS files) and flushes DNS cache. Does not reparse config files
-.br
-reload-lists Updates the lists (excl. HOSTS files) WITHOUT flushing the DNS cache. Does not reparse config files
 .br
 
 \fBcheckout\fR [repo] [branch]
 .br
-Switch Pi-hole subsystems to a different GitHub branch
+Switch Pi-hole subsystems to a different Github branch
 .br
 
 (repo options):
@@ -363,12 +351,6 @@ Switching Pi-hole subsystem branches
 .br
 Switch to core development branch
 .br
 
-\fBpihole arpflush\fR
-.br
-Flush information stored in Pi-hole's network tables
-.br
 
 .SH "SEE ALSO"
 
 \fBlighttpd\fR(8), \fBpihole-FTL\fR(8)
pihole (148 changed lines)
@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/bin/bash
 
 # Pi-hole: A black hole for Internet advertisements
 # (c) 2017 Pi-hole, LLC (https://pi-hole.net)
@@ -10,16 +10,19 @@
 # Please see LICENSE file for your rights under this license.
 
 readonly PI_HOLE_SCRIPT_DIR="/opt/pihole"
+readonly gravitylist="/etc/pihole/gravity.list"
+readonly blacklist="/etc/pihole/black.list"
 
-# setupVars and PI_HOLE_BIN_DIR are not readonly here because in some functions (checkout),
-# they might get set again when the installer is sourced. This causes an
+# setupVars is not readonly here because in some funcitons (checkout),
+# it might get set again when the installer is sourced. This causes an
 # error due to modifying a readonly variable.
 setupVars="/etc/pihole/setupVars.conf"
-PI_HOLE_BIN_DIR="/usr/local/bin"
 
 readonly colfile="${PI_HOLE_SCRIPT_DIR}/COL_TABLE"
 source "${colfile}"
 
+resolver="pihole-FTL"
 
 webpageFunc() {
   source "${PI_HOLE_SCRIPT_DIR}/webpage.sh"
   main "$@"
@@ -53,11 +56,6 @@ flushFunc() {
   exit 0
 }
 
-arpFunc() {
-  "${PI_HOLE_SCRIPT_DIR}"/piholeARPTable.sh "$@"
-  exit 0
-}
 
 updatePiholeFunc() {
   shift
   "${PI_HOLE_SCRIPT_DIR}"/update.sh "$@"
@@ -100,28 +98,24 @@ versionFunc() {
 
 restartDNS() {
   local svcOption svc str output status
-  svcOption="${1:-restart}"
+  svcOption="${1:-}"
 
-  # Determine if we should reload or restart
-  if [[ "${svcOption}" =~ "reload-lists" ]]; then
-    # Reloading of the lists has been requested
-    # Note 1: This will NOT re-read any *.conf files
-    # Note 2: We cannot use killall here as it does
-    # not know about real-time signals
-    svc="pkill -RTMIN pihole-FTL"
-    str="Reloading DNS lists"
-  elif [[ "${svcOption}" =~ "reload" ]]; then
-    # Reloading of the DNS cache has been requested
-    # Note: This will NOT re-read any *.conf files
-    svc="pkill -HUP pihole-FTL"
-    str="Flushing DNS cache"
+  # Determine if we should reload or restart restart
+  if [[ "${svcOption}" =~ "reload" ]]; then
+    # Using SIGHUP will NOT re-read any *.conf files
+    svc="killall -s SIGHUP ${resolver}"
   else
-    # A full restart has been requested
-    svc="service pihole-FTL restart"
-    str="Restarting DNS server"
+    # Get PID of resolver to determine if it needs to start or restart
+    if pidof pihole-FTL &> /dev/null; then
+      svcOption="restart"
+    else
+      svcOption="start"
+    fi
+    svc="service ${resolver} ${svcOption}"
   fi
 
   # Print output to Terminal, but not to Web Admin
+  str="${svcOption^}ing DNS service"
   [[ -t 1 ]] && echo -ne " ${INFO} ${str}..."
 
   output=$( { ${svc}; } 2>&1 )
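The left-hand side of the `restartDNS()` hunk above maps each option to a concrete command. A standalone sketch of that dispatch (printing the command instead of running it, so no real service is touched):

```shell
#!/usr/bin/env bash
# Sketch of the option-to-command mapping shown on the removed side of the
# hunk: reload-lists -> SIGRTMIN, reload -> SIGHUP, anything else -> restart.
dns_restart_command() {
    case "${1:-restart}" in
        *reload-lists*) echo "pkill -RTMIN pihole-FTL" ;;   # re-read lists, keep cache
        *reload*)       echo "pkill -HUP pihole-FTL" ;;     # flush the DNS cache
        *)              echo "service pihole-FTL restart" ;; # full restart
    esac
}

dns_restart_command reload-lists
```

Note the ordering matters: `reload-lists` must be matched before the looser `reload` pattern, which is why the original code tests it first.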
@@ -154,6 +148,14 @@ Time:
     echo -e " ${INFO} Blocking already disabled, nothing to do"
     exit 0
   fi
+  if [[ -e "${gravitylist}" ]]; then
+    mv "${gravitylist}" "${gravitylist}.bck"
+    echo "" > "${gravitylist}"
+  fi
+  if [[ -e "${blacklist}" ]]; then
+    mv "${blacklist}" "${blacklist}.bck"
+    echo "" > "${blacklist}"
+  fi
   if [[ $# > 1 ]]; then
     local error=false
     if [[ "${2}" == *"s" ]]; then
@@ -162,7 +164,7 @@ Time:
       local str="Disabling blocking for ${tt} seconds"
       echo -e " ${INFO} ${str}..."
       local str="Blocking will be re-enabled in ${tt} seconds"
-      nohup "${PI_HOLE_SCRIPT_DIR}"/pihole-reenable.sh ${tt} </dev/null &>/dev/null &
+      nohup bash -c "sleep ${tt}; pihole enable" </dev/null &>/dev/null &
     else
       local error=true
     fi
@@ -173,7 +175,7 @@ Time:
       echo -e " ${INFO} ${str}..."
       local str="Blocking will be re-enabled in ${tt} minutes"
       tt=$((${tt}*60))
-      nohup "${PI_HOLE_SCRIPT_DIR}"/pihole-reenable.sh ${tt} </dev/null &>/dev/null &
+      nohup bash -c "sleep ${tt}; pihole enable" </dev/null &>/dev/null &
     else
       local error=true
     fi
@@ -195,7 +197,6 @@ Time:
     fi
   else
     # Enable Pi-hole
-    killall -q pihole-reenable
     if grep -cq "BLOCKING_ENABLED=true" "${setupVars}"; then
       echo -e " ${INFO} Blocking already enabled, nothing to do"
       exit 0
@@ -203,6 +204,12 @@ Time:
     echo -e " ${INFO} Enabling blocking"
     local str="Pi-hole Enabled"
 
+    if [[ -e "${gravitylist}.bck" ]]; then
+      mv "${gravitylist}.bck" "${gravitylist}"
+    fi
+    if [[ -e "${blacklist}.bck" ]]; then
+      mv "${blacklist}.bck" "${blacklist}"
+    fi
     sed -i "/BLOCKING_ENABLED=/d" "${setupVars}"
     echo "BLOCKING_ENABLED=true" >> "${setupVars}"
   fi
@@ -220,9 +227,9 @@ Example: 'pihole logging on'
 Specify whether the Pi-hole log should be used
 
 Options:
-on Enable the Pi-hole log at /var/log/pihole/pihole.log
-off Disable and flush the Pi-hole log at /var/log/pihole/pihole.log
-off noflush Disable the Pi-hole log at /var/log/pihole/pihole.log"
+on Enable the Pi-hole log at /var/log/pihole.log
+off Disable and flush the Pi-hole log at /var/log/pihole.log
+off noflush Disable the Pi-hole log at /var/log/pihole.log"
     exit 0
   elif [[ "${1}" == "off" ]]; then
     # Disable logging
@@ -230,7 +237,7 @@ Options:
     sed -i 's/^QUERY_LOGGING=true/QUERY_LOGGING=false/' /etc/pihole/setupVars.conf
     if [[ "${2}" != "noflush" ]]; then
       # Flush logs
-      "${PI_HOLE_BIN_DIR}"/pihole -f
+      pihole -f
     fi
     echo -e " ${INFO} Disabling logging..."
     local str="Logging has been disabled!"
@@ -249,47 +256,16 @@ Options:
   echo -e "${OVER} ${TICK} ${str}"
 }
 
-analyze_ports() {
-  # FTL is listening at least on at least one port when this
-  # function is getting called
-  echo -e " ${TICK} DNS service is listening"
-  # Check individual address family/protocol combinations
-  # For a healthy Pi-hole, they should all be up (nothing printed)
-  if grep -q "IPv4.*UDP" <<< "${1}"; then
-    echo -e " ${TICK} UDP (IPv4)"
-  else
-    echo -e " ${CROSS} UDP (IPv4)"
-  fi
-  if grep -q "IPv4.*TCP" <<< "${1}"; then
-    echo -e " ${TICK} TCP (IPv4)"
-  else
-    echo -e " ${CROSS} TCP (IPv4)"
-  fi
-  if grep -q "IPv6.*UDP" <<< "${1}"; then
-    echo -e " ${TICK} UDP (IPv6)"
-  else
-    echo -e " ${CROSS} UDP (IPv6)"
-  fi
-  if grep -q "IPv6.*TCP" <<< "${1}"; then
-    echo -e " ${TICK} TCP (IPv6)"
-  else
-    echo -e " ${CROSS} TCP (IPv6)"
-  fi
-  echo ""
-}
 
 statusFunc() {
-  # Determine if there is a pihole service is listening on port 53
-  local listening
-  listening="$(lsof -Pni:53)"
-  if grep -q "pihole" <<< "${listening}"; then
+  # Determine if service is running on port 53 (Cr: https://superuser.com/a/806331)
+  if (echo > /dev/tcp/127.0.0.1/53) >/dev/null 2>&1; then
     if [[ "${1}" != "web" ]]; then
-      analyze_ports "${listening}"
+      echo -e " ${TICK} DNS service is running"
     fi
   else
     case "${1}" in
      "web") echo "-1";;
-      *) echo -e " ${CROSS} DNS service is NOT listening";;
+      *) echo -e " ${CROSS} DNS service is NOT running";;
    esac
    return 0
  fi
@@ -299,13 +275,13 @@ statusFunc() {
    # A config is commented out
    case "${1}" in
      "web") echo 0;;
-      *) echo -e " ${CROSS} Pi-hole blocking is disabled";;
+      *) echo -e " ${CROSS} Pi-hole blocking is Disabled";;
    esac
  elif grep -q "BLOCKING_ENABLED=true" /etc/pihole/setupVars.conf; then
    # Configs are set
    case "${1}" in
      "web") echo 1;;
-      *) echo -e " ${TICK} Pi-hole blocking is enabled";;
+      *) echo -e " ${TICK} Pi-hole blocking is Enabled";;
    esac
  else
    # No configs were found
@@ -314,7 +290,7 @@ statusFunc() {
      *) echo -e " ${INFO} Pi-hole blocking will be enabled";;
    esac
    # Enable blocking
-    "${PI_HOLE_BIN_DIR}"/pihole enable
+    pihole enable
  fi
}
 
@@ -332,12 +308,12 @@ tailFunc() {
  source /etc/pihole/setupVars.conf
 
  # Strip date from each line
-  # Color blocklist/blacklist/wildcard entries as red
-  # Color A/AAAA/DHCP strings as white
-  # Color everything else as gray
-  tail -f /var/log/pihole/pihole.log | sed -E \
-    -e "s,($(date +'%b %d ')| dnsmasq\[[0-9]*\]),,g" \
-    -e "s,(.*(blacklisted |gravity blocked ).* is (0.0.0.0|::|NXDOMAIN|${IPV4_ADDRESS%/*}|${IPV6_ADDRESS:-NULL}).*),${COL_RED}&${COL_NC}," \
+  # Colour blocklist/blacklist/wildcard entries as red
+  # Colour A/AAAA/DHCP strings as white
+  # Colour everything else as gray
+  tail -f /var/log/pihole.log | sed -E \
+    -e "s,($(date +'%b %d ')| dnsmasq[.*[0-9]]),,g" \
+    -e "s,(.*(gravity.list|black.list|regex.list| config ).* is (0.0.0.0|::|NXDOMAIN|${IPV4_ADDRESS%/*}|${IPV6_ADDRESS:-NULL}).*),${COL_RED}&${COL_NC}," \
    -e "s,.*(query\\[A|DHCP).*,${COL_NC}&${COL_NC}," \
    -e "s,.*,${COL_GRAY}&${COL_NC},"
  exit 0
@@ -347,7 +323,7 @@ piholeCheckoutFunc() {
  if [[ "$2" == "-h" ]] || [[ "$2" == "--help" ]]; then
    echo "Usage: pihole checkout [repo] [branch]
Example: 'pihole checkout master' or 'pihole checkout core dev'
-Switch Pi-hole subsystems to a different GitHub branch
+Switch Pi-hole subsystems to a different Github branch
 
Repositories:
  core [branch] Change the branch of Pi-hole's core subsystem
@@ -410,10 +386,8 @@ Add '-h' after specific commands for more information on usage
 
Whitelist/Blacklist Options:
  -w, whitelist Whitelist domain(s)
  -b, blacklist Blacklist domain(s)
-  --regex, regex Regex blacklist domains(s)
-  --white-regex Regex whitelist domains(s)
  --wild, wildcard Wildcard blacklist domain(s)
-  --white-wild Wildcard whitelist domain(s)
+  --regex, regex Regex blacklist domains(s)
  Add '-h' for more info on whitelist/blacklist usage
 
Debugging Options:
@@ -443,12 +417,9 @@ Options:
  enable Enable Pi-hole subsystems
  disable Disable Pi-hole subsystems
    Add '-h' for more info on disable usage
-  restartdns Full restart Pi-hole subsystems
-    Add 'reload' to update the lists and flush the cache without restarting the DNS server
-    Add 'reload-lists' to only update the lists WITHOUT flushing the cache or restarting the DNS server
-  checkout Switch Pi-hole subsystems to a different GitHub branch
-    Add '-h' for more info on checkout usage
-  arpflush Flush information stored in Pi-hole's network tables";
+  restartdns Restart Pi-hole subsystems
+  checkout Switch Pi-hole subsystems to a different Github branch
+    Add '-h' for more info on checkout usage";
  exit 0
}
 
@@ -477,8 +448,6 @@ case "${1}" in
  "-b" | "blacklist" ) listFunc "$@";;
  "--wild" | "wildcard" ) listFunc "$@";;
  "--regex" | "regex" ) listFunc "$@";;
-  "--white-regex" | "white-regex" ) listFunc "$@";;
-  "--white-wild" | "white-wild" ) listFunc "$@";;
  "-d" | "debug" ) debugFunc "$@";;
  "-f" | "flush" ) flushFunc "$@";;
  "-up" | "updatePihole" ) updatePiholeFunc "$@";;
@@ -499,6 +468,5 @@ case "${1}" in
  "checkout" ) piholeCheckoutFunc "$@";;
  "tricorder" ) tricorderFunc;;
  "updatechecker" ) updateCheckFunc "$@";;
-  "arpflush" ) arpFunc "$@";;
  * ) helpFunc;;
esac
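The first `sed` stage in the `tailFunc()` hunk above strips the timestamp and the ` dnsmasq[PID]` tag from each log line before colouring. A standalone sketch of just that stage, fed a made-up sample line:

```shell
#!/usr/bin/env bash
# Sketch of tailFunc's first sed expression: remove "Mon DD " (today's date)
# and " dnsmasq[PID]" from a dnsmasq log line. The sample line is invented.
strip_log_prefix() {
    sed -E "s,($(date +'%b %d ')| dnsmasq\[[0-9]*\]),,g"
}

echo "$(date +'%b %d ')12:00:00 dnsmasq[123]: query[A] example.com" | strip_log_prefix
```

Because the alternation embeds today's `date +'%b %d '`, the expression only matches lines written today, which is exactly what `tail -f` on a live log produces.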
@@ -1,5 +0,0 @@
-Raspbian=9,10
-Ubuntu=16,18,20
-Debian=9,10
-Fedora=31,32
-CentOS=7,8
@@ -7,11 +7,11 @@ From command line all you need to do is:
 - `pip install tox`
 - `tox`
 
-Tox handles setting up a virtual environment for python dependencies, installing dependencies, building the docker images used by tests, and finally running tests. It's an easy way to have travis-ci like build behavior locally.
+Tox handles setting up a virtual environment for python dependancies, installing dependancies, building the docker images used by tests, and finally running tests. It's an easy way to have travis-ci like build behavior locally.
 
 ## Alternative py.test method of running tests
 
-You're responsible for setting up your virtual env and dependencies in this situation.
+You're responsible for setting up your virtual env and dependancies in this situation.
 
 ```
 py.test -vv -n auto -m "build_stage"
@@ -14,9 +14,9 @@ SETUPVARS = {
     'PIHOLE_DNS_2': '4.2.2.2'
 }
 
-tick_box = "[\x1b[1;32m\u2713\x1b[0m]"
-cross_box = "[\x1b[1;31m\u2717\x1b[0m]"
-info_box = "[i]"
+tick_box = "[\x1b[1;32m\xe2\x9c\x93\x1b[0m]".decode("utf-8")
+cross_box = "[\x1b[1;31m\xe2\x9c\x97\x1b[0m]".decode("utf-8")
+info_box = "[i]".decode("utf-8")
 
 
 @pytest.fixture
@@ -38,7 +38,9 @@ def Pihole(Docker):
         return out
 
     funcType = type(Docker.run)
-    Docker.run = funcType(run_bash, Docker)
+    Docker.run = funcType(run_bash,
+                          Docker,
+                          testinfra.backend.docker.DockerBackend)
     return Docker
 
 
@@ -104,7 +106,7 @@ def mock_command(script, args, container):
     #!/bin/bash -e
     echo "\$0 \$@" >> /var/log/{script}
     case "\$1" in'''.format(script=script))
-    for k, v in args.items():
+    for k, v in args.iteritems():
         case = dedent('''
         {arg})
             echo {res}
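The `items()` / `iteritems()` changes in these hunks are the standard Python 2 to 3 dict-API migration: `dict.iteritems()` no longer exists in Python 3, while `dict.items()` works in both. A minimal illustration using the `SETUPVARS` mapping from this test file (interface and DNS values other than `PIHOLE_DNS_2` are filled in here for the example):

```python
# dict.iteritems() was removed in Python 3; dict.items() is valid in both
# (in Python 2 it builds a list, which is fine at this size).
SETUPVARS = {
    'PIHOLE_INTERFACE': 'eth99',
    'PIHOLE_DNS_1': '4.2.2.1',
    'PIHOLE_DNS_2': '4.2.2.2',
}

# Same rendering the tests perform when writing setupVars.conf lines.
lines = ["{}={}".format(k, v) for k, v in SETUPVARS.items()]
```

This is why the diff touches every `for k, v in ...` loop in the suite: it is a mechanical, behaviour-preserving rename.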
@@ -131,7 +133,7 @@ def mock_command_2(script, args, container):
     #!/bin/bash -e
     echo "\$0 \$@" >> /var/log/{script}
     case "\$1 \$2" in'''.format(script=script))
-    for k, v in args.items():
+    for k, v in args.iteritems():
         case = dedent('''
         \"{arg}\")
             echo \"{res}\"
@@ -1,4 +1,4 @@
-FROM fedora:30
+FROM fedora:latest
 
 ENV GITDIR /etc/.pihole
 ENV SCRIPTDIR /opt/pihole
@@ -18,6 +18,6 @@ run_local = testinfra.get_backend(
 def test_build_pihole_image(image, tag):
     build_cmd = run_local('docker build -f {} -t {} .'.format(image, tag))
     if build_cmd.rc != 0:
-        print(build_cmd.stdout)
-        print(build_cmd.stderr)
+        print build_cmd.stdout
+        print build_cmd.stderr
     assert build_cmd.rc == 0
@@ -1,6 +1,6 @@
 from textwrap import dedent
 import re
-from .conftest import (
+from conftest import (
     SETUPVARS,
     tick_box,
     info_box,
@@ -34,7 +34,7 @@ def test_setupVars_are_sourced_to_global_scope(Pihole):
     This confirms the sourced variables are in scope between functions
     '''
     setup_var_file = 'cat <<EOF> /etc/pihole/setupVars.conf\n'
-    for k, v in SETUPVARS.items():
+    for k, v in SETUPVARS.iteritems():
         setup_var_file += "{}={}\n".format(k, v)
     setup_var_file += "EOF\n"
     Pihole.run(setup_var_file)
@@ -59,7 +59,7 @@ def test_setupVars_are_sourced_to_global_scope(Pihole):
 
     output = run_script(Pihole, script).stdout
 
-    for k, v in SETUPVARS.items():
+    for k, v in SETUPVARS.iteritems():
         assert "{}={}".format(k, v) in output
 
 
@@ -69,7 +69,7 @@ def test_setupVars_saved_to_file(Pihole):
     '''
     # dedent works better with this and padding matching script below
     set_setup_vars = '\n'
-    for k, v in SETUPVARS.items():
+    for k, v in SETUPVARS.iteritems():
         set_setup_vars += " {}={}\n".format(k, v)
     Pihole.run(set_setup_vars).stdout
 
@@ -88,20 +88,239 @@ def test_setupVars_saved_to_file(Pihole):
 
     output = run_script(Pihole, script).stdout
 
-    for k, v in SETUPVARS.items():
+    for k, v in SETUPVARS.iteritems():
         assert "{}={}".format(k, v) in output
 
 
-def test_selinux_not_detected(Pihole):
+def test_configureFirewall_firewalld_running_no_errors(Pihole):
     '''
-    confirms installer continues when SELinux configuration file does not exist
+    confirms firewalld rules are applied when firewallD is running
     '''
+    # firewallD returns 'running' as status
+    mock_command('firewall-cmd', {'*': ('running', 0)}, Pihole)
+    # Whiptail dialog returns Ok for user prompt
+    mock_command('whiptail', {'*': ('', 0)}, Pihole)
+    configureFirewall = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    configureFirewall
+    ''')
+    expected_stdout = 'Configuring FirewallD for httpd and pihole-FTL'
+    assert expected_stdout in configureFirewall.stdout
+    firewall_calls = Pihole.run('cat /var/log/firewall-cmd').stdout
+    assert 'firewall-cmd --state' in firewall_calls
+    assert ('firewall-cmd '
+            '--permanent '
+            '--add-service=http '
+            '--add-service=dns') in firewall_calls
+    assert 'firewall-cmd --reload' in firewall_calls
+
+
+def test_configureFirewall_firewalld_disabled_no_errors(Pihole):
+    '''
+    confirms firewalld rules are not applied when firewallD is not running
+    '''
+    # firewallD returns non-running status
+    mock_command('firewall-cmd', {'*': ('not running', '1')}, Pihole)
+    configureFirewall = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    configureFirewall
+    ''')
+    expected_stdout = ('No active firewall detected.. '
+                       'skipping firewall configuration')
+    assert expected_stdout in configureFirewall.stdout
+
+
+def test_configureFirewall_firewalld_enabled_declined_no_errors(Pihole):
+    '''
+    confirms firewalld rules are not applied when firewallD is running, user
+    declines ruleset
+    '''
+    # firewallD returns running status
+    mock_command('firewall-cmd', {'*': ('running', 0)}, Pihole)
+    # Whiptail dialog returns Cancel for user prompt
+    mock_command('whiptail', {'*': ('', 1)}, Pihole)
+    configureFirewall = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    configureFirewall
+    ''')
+    expected_stdout = 'Not installing firewall rulesets.'
+    assert expected_stdout in configureFirewall.stdout
+
+
+def test_configureFirewall_no_firewall(Pihole):
+    ''' confirms firewall skipped no daemon is running '''
+    configureFirewall = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    configureFirewall
+    ''')
+    expected_stdout = 'No active firewall detected'
+    assert expected_stdout in configureFirewall.stdout
+
+
+def test_configureFirewall_IPTables_enabled_declined_no_errors(Pihole):
+    '''
+    confirms IPTables rules are not applied when IPTables is running, user
+    declines ruleset
+    '''
+    # iptables command exists
+    mock_command('iptables', {'*': ('', '0')}, Pihole)
+    # modinfo returns always true (ip_tables module check)
+    mock_command('modinfo', {'*': ('', '0')}, Pihole)
+    # Whiptail dialog returns Cancel for user prompt
+    mock_command('whiptail', {'*': ('', '1')}, Pihole)
+    configureFirewall = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    configureFirewall
+    ''')
+    expected_stdout = 'Not installing firewall rulesets.'
+    assert expected_stdout in configureFirewall.stdout
+
+
+def test_configureFirewall_IPTables_enabled_rules_exist_no_errors(Pihole):
|
||||||
|
'''
|
||||||
|
confirms IPTables rules are not applied when IPTables is running and rules
|
||||||
|
exist
|
||||||
|
'''
|
||||||
|
# iptables command exists and returns 0 on calls
|
||||||
|
# (should return 0 on iptables -C)
|
||||||
|
mock_command('iptables', {'-S': ('-P INPUT DENY', '0')}, Pihole)
|
||||||
|
# modinfo returns always true (ip_tables module check)
|
||||||
|
mock_command('modinfo', {'*': ('', '0')}, Pihole)
|
||||||
|
# Whiptail dialog returns Cancel for user prompt
|
||||||
|
mock_command('whiptail', {'*': ('', '0')}, Pihole)
|
||||||
|
configureFirewall = Pihole.run('''
|
||||||
|
source /opt/pihole/basic-install.sh
|
||||||
|
configureFirewall
|
||||||
|
''')
|
||||||
|
expected_stdout = 'Installing new IPTables firewall rulesets'
|
||||||
|
assert expected_stdout in configureFirewall.stdout
|
||||||
|
firewall_calls = Pihole.run('cat /var/log/iptables').stdout
|
||||||
|
# General call type occurances
|
||||||
|
assert len(re.findall(r'iptables -S', firewall_calls)) == 1
|
||||||
|
assert len(re.findall(r'iptables -C', firewall_calls)) == 4
|
||||||
|
assert len(re.findall(r'iptables -I', firewall_calls)) == 0
|
||||||
|
|
||||||
|
# Specific port call occurances
|
||||||
|
assert len(re.findall(r'tcp --dport 80', firewall_calls)) == 1
|
||||||
|
assert len(re.findall(r'tcp --dport 53', firewall_calls)) == 1
|
||||||
|
assert len(re.findall(r'udp --dport 53', firewall_calls)) == 1
|
||||||
|
assert len(re.findall(r'tcp --dport 4711:4720', firewall_calls)) == 1
|
||||||
|
|
||||||
|
|
||||||
|
def test_configureFirewall_IPTables_enabled_not_exist_no_errors(Pihole):
|
||||||
|
'''
|
||||||
|
confirms IPTables rules are applied when IPTables is running and rules do
|
||||||
|
not exist
|
||||||
|
'''
|
||||||
|
# iptables command and returns 0 on calls (should return 1 on iptables -C)
|
||||||
|
mock_command(
|
||||||
|
'iptables',
|
||||||
|
{
|
||||||
|
'-S': (
|
||||||
|
'-P INPUT DENY',
|
||||||
|
'0'
|
||||||
|
),
|
||||||
|
'-C': (
|
||||||
|
'',
|
||||||
|
1
|
||||||
|
),
|
||||||
|
'-I': (
|
||||||
|
'',
|
||||||
|
0
|
||||||
|
)
|
||||||
|
},
|
||||||
|
Pihole
|
||||||
|
)
|
||||||
|
# modinfo returns always true (ip_tables module check)
|
||||||
|
mock_command('modinfo', {'*': ('', '0')}, Pihole)
|
||||||
|
# Whiptail dialog returns Cancel for user prompt
|
||||||
|
mock_command('whiptail', {'*': ('', '0')}, Pihole)
|
||||||
|
configureFirewall = Pihole.run('''
|
||||||
|
source /opt/pihole/basic-install.sh
|
||||||
|
configureFirewall
|
||||||
|
''')
|
||||||
|
expected_stdout = 'Installing new IPTables firewall rulesets'
|
||||||
|
assert expected_stdout in configureFirewall.stdout
|
||||||
|
firewall_calls = Pihole.run('cat /var/log/iptables').stdout
|
||||||
|
# General call type occurances
|
||||||
|
assert len(re.findall(r'iptables -S', firewall_calls)) == 1
|
||||||
|
assert len(re.findall(r'iptables -C', firewall_calls)) == 4
|
||||||
|
assert len(re.findall(r'iptables -I', firewall_calls)) == 4
|
||||||
|
|
||||||
|
# Specific port call occurances
|
||||||
|
assert len(re.findall(r'tcp --dport 80', firewall_calls)) == 2
|
||||||
|
assert len(re.findall(r'tcp --dport 53', firewall_calls)) == 2
|
||||||
|
assert len(re.findall(r'udp --dport 53', firewall_calls)) == 2
|
||||||
|
assert len(re.findall(r'tcp --dport 4711:4720', firewall_calls)) == 2
|
||||||
|
|
||||||
|
|
||||||
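The call-count assertions in the IPTables tests above rely on `len(re.findall(...))` returning one list entry per non-overlapping match. A standalone illustration (the sample call log below is invented, not taken from a real run):

```python
import re

# Made-up call log of the sort the mocked iptables writes out
firewall_calls = '\n'.join([
    'iptables -S',
    'iptables -C INPUT -p tcp --dport 80 -j ACCEPT',
    'iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT',
    'iptables -C INPUT -p udp --dport 53 -j ACCEPT',
])

# re.findall returns a list with one element per non-overlapping match,
# so its length is the occurrence count the assertions compare against
print(len(re.findall(r'iptables -S', firewall_calls)))     # 1
print(len(re.findall(r'iptables -C', firewall_calls)))     # 2
print(len(re.findall(r'tcp --dport 80', firewall_calls)))  # 2
```

Counting both `-C` (check) and `-I` (insert) calls lets the tests distinguish "rules already present" from "rules freshly installed" without parsing whole command lines.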
+def test_selinux_enforcing_default_exit(Pihole):
+    '''
+    confirms installer prompts to exit when SELinux is Enforcing by default
+    '''
+    # getenforce returns the running state of SELinux
+    mock_command('getenforce', {'*': ('Enforcing', '0')}, Pihole)
+    # Whiptail dialog returns Cancel for user prompt
+    mock_command('whiptail', {'*': ('', '1')}, Pihole)
     check_selinux = Pihole.run('''
-    rm -f /etc/selinux/config
     source /opt/pihole/basic-install.sh
     checkSelinux
     ''')
-    expected_stdout = info_box + ' SELinux not detected'
+    expected_stdout = info_box + ' SELinux mode detected: Enforcing'
+    assert expected_stdout in check_selinux.stdout
+    expected_stdout = 'SELinux Enforcing detected, exiting installer'
+    assert expected_stdout in check_selinux.stdout
+    assert check_selinux.rc == 1
+
+
+def test_selinux_enforcing_continue(Pihole):
+    '''
+    confirms installer prompts to continue with custom policy warning
+    '''
+    # getenforce returns the running state of SELinux
+    mock_command('getenforce', {'*': ('Enforcing', '0')}, Pihole)
+    # Whiptail dialog returns Continue for user prompt
+    mock_command('whiptail', {'*': ('', '0')}, Pihole)
+    check_selinux = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    checkSelinux
+    ''')
+    expected_stdout = info_box + ' SELinux mode detected: Enforcing'
+    assert expected_stdout in check_selinux.stdout
+    expected_stdout = info_box + (' Continuing installation with SELinux '
+                                  'Enforcing')
+    assert expected_stdout in check_selinux.stdout
+    expected_stdout = info_box + (' Please refer to official SELinux '
+                                  'documentation to create a custom policy')
+    assert expected_stdout in check_selinux.stdout
+    assert check_selinux.rc == 0
+
+
+def test_selinux_permissive(Pihole):
+    '''
+    confirms installer continues when SELinux is Permissive
+    '''
+    # getenforce returns the running state of SELinux
+    mock_command('getenforce', {'*': ('Permissive', '0')}, Pihole)
+    check_selinux = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    checkSelinux
+    ''')
+    expected_stdout = info_box + ' SELinux mode detected: Permissive'
+    assert expected_stdout in check_selinux.stdout
+    assert check_selinux.rc == 0
+
+
+def test_selinux_disabled(Pihole):
+    '''
+    confirms installer continues when SELinux is Disabled
+    '''
+    mock_command('getenforce', {'*': ('Disabled', '0')}, Pihole)
+    check_selinux = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    checkSelinux
+    ''')
+    expected_stdout = info_box + ' SELinux mode detected: Disabled'
     assert expected_stdout in check_selinux.stdout
     assert check_selinux.rc == 0
 
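The tests above stub out real binaries (`firewall-cmd`, `whiptail`, `getenforce`) with a `mock_command` helper imported from the suite's conftest. A simplified sketch of how such a helper could be built (hypothetical: the real fixture installs the stub inside the test container, and its argument matching may differ):

```python
import os
import stat
import subprocess
import tempfile


def mock_command(name, behaviors, bindir):
    """Write an executable stub `name` into bindir that logs every
    invocation to <bindir>/<name>.log and answers with a canned
    (stdout, exit code) pair keyed by its first argument ('*' = default)."""
    log = os.path.join(bindir, name + '.log')
    lines = ['#!/bin/sh',
             'echo "{0} $@" >> {1}'.format(name, log),
             'case "$1" in']
    for arg, (out, rc) in behaviors.items():
        if arg != '*':
            lines.append('  {0}) echo "{1}"; exit {2} ;;'.format(arg, out, rc))
    default_out, default_rc = behaviors.get('*', ('', 0))
    lines.append('  *) echo "{0}"; exit {1} ;;'.format(default_out, default_rc))
    lines.append('esac')
    path = os.path.join(bindir, name)
    with open(path, 'w') as f:
        f.write('\n'.join(lines) + '\n')
    os.chmod(path, os.stat(path).st_mode | stat.S_IEXEC)
    return path, log


bindir = tempfile.mkdtemp()
path, log = mock_command('firewall-cmd', {'*': ('running', 0)}, bindir)
result = subprocess.run([path, '--state'], capture_output=True, text=True)
print(result.stdout.strip())      # the canned answer
print(open(log).read().strip())   # the logged invocation
```

This shows why the tests can later `cat /var/log/firewall-cmd` and count calls: the stub both answers and records.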
@@ -119,7 +338,7 @@ def test_installPiholeWeb_fresh_install_no_errors(Pihole):
     expected_stdout = tick_box + (' Creating directory for blocking page, '
                                   'and copying files')
     assert expected_stdout in installWeb.stdout
-    expected_stdout = info_box + ' Backing up index.lighttpd.html'
+    expected_stdout = cross_box + ' Backing up index.lighttpd.html'
     assert expected_stdout in installWeb.stdout
     expected_stdout = ('No default index.lighttpd.html file found... '
                        'not backing up')
@@ -179,11 +398,7 @@ def test_FTL_detect_aarch64_no_errors(Pihole):
     )
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
-    create_pihole_user
-    funcOutput=$(get_binary_name)
-    binary="pihole-FTL${funcOutput##*pihole-FTL}"
-    theRest="${funcOutput%pihole-FTL*}"
-    FTLdetect "${binary}" "${theRest}"
+    FTLdetect
     ''')
     expected_stdout = info_box + ' FTL Checks...'
     assert expected_stdout in detectPlatform.stdout
@@ -203,11 +418,7 @@ def test_FTL_detect_armv6l_no_errors(Pihole):
     mock_command('ldd', {'/bin/ls': ('/lib/ld-linux-armhf.so.3', '0')}, Pihole)
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
-    create_pihole_user
-    funcOutput=$(get_binary_name)
-    binary="pihole-FTL${funcOutput##*pihole-FTL}"
-    theRest="${funcOutput%pihole-FTL*}"
-    FTLdetect "${binary}" "${theRest}"
+    FTLdetect
    ''')
     expected_stdout = info_box + ' FTL Checks...'
     assert expected_stdout in detectPlatform.stdout
@@ -228,11 +439,7 @@ def test_FTL_detect_armv7l_no_errors(Pihole):
     mock_command('ldd', {'/bin/ls': ('/lib/ld-linux-armhf.so.3', '0')}, Pihole)
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
-    create_pihole_user
-    funcOutput=$(get_binary_name)
-    binary="pihole-FTL${funcOutput##*pihole-FTL}"
-    theRest="${funcOutput%pihole-FTL*}"
-    FTLdetect "${binary}" "${theRest}"
+    FTLdetect
     ''')
     expected_stdout = info_box + ' FTL Checks...'
     assert expected_stdout in detectPlatform.stdout
@@ -248,11 +455,7 @@ def test_FTL_detect_x86_64_no_errors(Pihole):
     '''
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
-    create_pihole_user
-    funcOutput=$(get_binary_name)
-    binary="pihole-FTL${funcOutput##*pihole-FTL}"
-    theRest="${funcOutput%pihole-FTL*}"
-    FTLdetect "${binary}" "${theRest}"
+    FTLdetect
     ''')
     expected_stdout = info_box + ' FTL Checks...'
     assert expected_stdout in detectPlatform.stdout
@@ -268,11 +471,7 @@ def test_FTL_detect_unknown_no_errors(Pihole):
     mock_command('uname', {'-m': ('mips', '0')}, Pihole)
     detectPlatform = Pihole.run('''
     source /opt/pihole/basic-install.sh
-    create_pihole_user
-    funcOutput=$(get_binary_name)
-    binary="pihole-FTL${funcOutput##*pihole-FTL}"
-    theRest="${funcOutput%pihole-FTL*}"
-    FTLdetect "${binary}" "${theRest}"
+    FTLdetect
     ''')
     expected_stdout = 'Not able to detect architecture (unknown: mips)'
     assert expected_stdout in detectPlatform.stdout
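The lines removed in the hunks above split the output of `get_binary_name` with bash parameter expansion: `${funcOutput##*pihole-FTL}` drops the longest prefix up to and including the last `pihole-FTL`, while `${funcOutput%pihole-FTL*}` drops the shortest suffix starting at it. A quick standalone check of those two expansions (the sample `funcOutput` string is made up; requires `bash` on PATH):

```python
import subprocess

# Invented get_binary_name output: status text followed by the binary name
snippet = r'''
funcOutput="installer status text pihole-FTL-aarch64-linux-gnu"
binary="pihole-FTL${funcOutput##*pihole-FTL}"   # re-attach the stripped prefix
theRest="${funcOutput%pihole-FTL*}"             # everything before the name
printf '%s\n' "$binary" "$theRest"
'''
result = subprocess.run(['bash', '-c', snippet],
                        capture_output=True, text=True)
print(result.stdout)
```

The split works because the binary name is the last `pihole-FTL*` token in the function's combined output, so status messages printed earlier do not contaminate `binary`.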
@@ -282,40 +481,79 @@ def test_FTL_download_aarch64_no_errors(Pihole):
     '''
     confirms only aarch64 package is downloaded for FTL engine
     '''
-    # mock whiptail answers and ensure installer dependencies
-    mock_command('whiptail', {'*': ('', '0')}, Pihole)
-    Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    distro_check
-    install_dependent_packages ${INSTALLER_DEPS[@]}
-    ''')
     download_binary = Pihole.run('''
     source /opt/pihole/basic-install.sh
-    create_pihole_user
-    FTLinstall "pihole-FTL-aarch64-linux-gnu"
+    binary="pihole-FTL-aarch64-linux-gnu"
+    FTLinstall
     ''')
     expected_stdout = tick_box + ' Downloading and Installing FTL'
     assert expected_stdout in download_binary.stdout
     assert 'error' not in download_binary.stdout.lower()
+
+
+def test_FTL_download_unknown_fails_no_errors(Pihole):
+    '''
+    confirms unknown binary is not downloaded for FTL engine
+    '''
+    download_binary = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    binary="pihole-FTL-mips"
+    FTLinstall
+    ''')
+    expected_stdout = cross_box + ' Downloading and Installing FTL'
+    assert expected_stdout in download_binary.stdout
+    error1 = 'Error: URL https://github.com/pi-hole/FTL/releases/download/'
+    assert error1 in download_binary.stdout
+    error2 = 'not found'
+    assert error2 in download_binary.stdout
+
+
+def test_FTL_download_binary_unset_no_errors(Pihole):
+    '''
+    confirms unset binary variable does not download FTL engine
+    '''
+    download_binary = Pihole.run('''
+    source /opt/pihole/basic-install.sh
+    FTLinstall
+    ''')
+    expected_stdout = cross_box + ' Downloading and Installing FTL'
+    assert expected_stdout in download_binary.stdout
+    error1 = 'Error: URL https://github.com/pi-hole/FTL/releases/download/'
+    assert error1 in download_binary.stdout
+    error2 = 'not found'
+    assert error2 in download_binary.stdout
 
 
 def test_FTL_binary_installed_and_responsive_no_errors(Pihole):
     '''
     confirms FTL binary is copied and functional in installed location
     '''
     installed_binary = Pihole.run('''
     source /opt/pihole/basic-install.sh
-    create_pihole_user
-    funcOutput=$(get_binary_name)
-    binary="pihole-FTL${funcOutput##*pihole-FTL}"
-    theRest="${funcOutput%pihole-FTL*}"
-    FTLdetect "${binary}" "${theRest}"
+    FTLdetect
     pihole-FTL version
     ''')
     expected_stdout = 'v'
     assert expected_stdout in installed_binary.stdout
+
+
+# def test_FTL_support_files_installed(Pihole):
+#     '''
+#     confirms FTL support files are installed
+#     '''
+#     support_files = Pihole.run('''
+#     source /opt/pihole/basic-install.sh
+#     FTLdetect
+#     stat -c '%a %n' /var/log/pihole-FTL.log
+#     stat -c '%a %n' /run/pihole-FTL.port
+#     stat -c '%a %n' /run/pihole-FTL.pid
+#     ls -lac /run
+#     ''')
+#     assert '644 /run/pihole-FTL.port' in support_files.stdout
+#     assert '644 /run/pihole-FTL.pid' in support_files.stdout
+#     assert '644 /var/log/pihole-FTL.log' in support_files.stdout
 
 
 def test_IPv6_only_link_local(Pihole):
     '''
     confirms IPv6 blocking is disabled for Link-local address
@@ -432,42 +670,3 @@ def test_IPv6_ULA_GUA_test(Pihole):
     ''')
     expected_stdout = 'Found IPv6 ULA address, using it for blocking IPv6 ads'
     assert expected_stdout in detectPlatform.stdout
-
-
-def test_validate_ip_valid(Pihole):
-    '''
-    Given a valid IP address, valid_ip returns success
-    '''
-
-    output = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    valid_ip "192.168.1.1"
-    ''')
-
-    assert output.rc == 0
-
-
-def test_validate_ip_invalid_octet(Pihole):
-    '''
-    Given an invalid IP address (large octet), valid_ip returns an error
-    '''
-
-    output = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    valid_ip "1092.168.1.1"
-    ''')
-
-    assert output.rc == 1
-
-
-def test_validate_ip_invalid_letters(Pihole):
-    '''
-    Given an invalid IP address (contains letters), valid_ip returns an error
-    '''
-
-    output = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    valid_ip "not an IP"
-    ''')
-
-    assert output.rc == 1
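The deleted `test_validate_ip_*` functions exercised a `valid_ip` shell function in `basic-install.sh` on the same three inputs. Those cases can be mirrored with the Python stdlib `ipaddress` module (an illustrative equivalent only, not the installer's actual check):

```python
import ipaddress


def valid_ip(addr):
    """Return True for a syntactically valid dotted-quad IPv4 address."""
    try:
        ipaddress.IPv4Address(addr)
        return True
    except ValueError:  # AddressValueError is a ValueError subclass
        return False


print(valid_ip("192.168.1.1"))   # True
print(valid_ip("1092.168.1.1"))  # False: octet out of 0-255 range
print(valid_ip("not an IP"))     # False
```

The shell version returns exit code 0/1 where this returns True/False, which is why the removed tests asserted on `output.rc`.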
@@ -1,75 +1,13 @@
 import pytest
-from .conftest import (
+from conftest import (
     tick_box,
     info_box,
     cross_box,
     mock_command,
+    mock_command_2,
 )
 
 
-def mock_selinux_config(state, Pihole):
-    '''
-    Creates a mock SELinux config file with expected content
-    '''
-    # validate state string
-    valid_states = ['enforcing', 'permissive', 'disabled']
-    assert state in valid_states
-    # getenforce returns the running state of SELinux
-    mock_command('getenforce', {'*': (state.capitalize(), '0')}, Pihole)
-    # create mock configuration with desired content
-    Pihole.run('''
-    mkdir /etc/selinux
-    echo "SELINUX={state}" > /etc/selinux/config
-    '''.format(state=state.lower()))
-
-
-@pytest.mark.parametrize("tag", [('centos'), ('fedora'), ])
-def test_selinux_enforcing_exit(Pihole):
-    '''
-    confirms installer prompts to exit when SELinux is Enforcing by default
-    '''
-    mock_selinux_config("enforcing", Pihole)
-    check_selinux = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    checkSelinux
-    ''')
-    expected_stdout = cross_box + ' Current SELinux: Enforcing'
-    assert expected_stdout in check_selinux.stdout
-    expected_stdout = 'SELinux Enforcing detected, exiting installer'
-    assert expected_stdout in check_selinux.stdout
-    assert check_selinux.rc == 1
-
-
-@pytest.mark.parametrize("tag", [('centos'), ('fedora'), ])
-def test_selinux_permissive(Pihole):
-    '''
-    confirms installer continues when SELinux is Permissive
-    '''
-    mock_selinux_config("permissive", Pihole)
-    check_selinux = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    checkSelinux
-    ''')
-    expected_stdout = tick_box + ' Current SELinux: Permissive'
-    assert expected_stdout in check_selinux.stdout
-    assert check_selinux.rc == 0
-
-
-@pytest.mark.parametrize("tag", [('centos'), ('fedora'), ])
-def test_selinux_disabled(Pihole):
-    '''
-    confirms installer continues when SELinux is Disabled
-    '''
-    mock_selinux_config("disabled", Pihole)
-    check_selinux = Pihole.run('''
-    source /opt/pihole/basic-install.sh
-    checkSelinux
-    ''')
-    expected_stdout = tick_box + ' Current SELinux: Disabled'
-    assert expected_stdout in check_selinux.stdout
-    assert check_selinux.rc == 0
-
-
 @pytest.mark.parametrize("tag", [('fedora'), ])
 def test_epel_and_remi_not_installed_fedora(Pihole):
     '''
@@ -14,5 +14,5 @@ def test_scripts_pass_shellcheck():
                   "shellcheck -x \"$file\" -e SC1090,SC1091; "
                   "done;")
     results = run_local(shellcheck)
-    print(results.stdout)
+    print results.stdout
     assert '' == results.stdout