Compare commits


3 Commits

Author        SHA1         Message                                    Date
Adam Warner   f27efec3e6   X-Pi-hole removed from blocking page...    2021-10-06 23:31:49 +01:00
              Signed-off-by: Adam Warner <me@adamwarner.co.uk>
Adam Warner   2f50da6ea5   no need to declare $viewPort               2021-10-06 23:31:48 +01:00
              Signed-off-by: Adam Warner <me@adamwarner.co.uk>
Adam Warner   603caa9239   Dependabot config tweak                    2021-10-06 23:31:48 +01:00
              Signed-off-by: Adam Warner <me@adamwarner.co.uk>
60 changed files with 1886 additions and 2751 deletions

.github/release.yml

@@ -1,7 +0,0 @@
changelog:
exclude:
labels:
- internal
authors:
- dependabot
- github-actions


@@ -1,25 +0,0 @@
name: Mark stale issues
on:
schedule:
- cron: '0 * * * *'
workflow_dispatch:
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: actions/stale@v4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30
days-before-close: 5
stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Please comment or update this issue or it will be closed in 5 days.'
stale-issue-label: 'stale'
exempt-issue-labels: 'Internal, Fixed in next release, Bug: Confirmed, Documentation Needed'
exempt-all-issue-assignees: true
operations-per-run: 300


@@ -1,27 +0,0 @@
name: Sync Back to Development
on:
push:
branches:
- master
jobs:
sync-branches:
runs-on: ubuntu-latest
name: Syncing branches
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Opening pull request
id: pull
uses: tretuna/sync-branches@1.4.0
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FROM_BRANCH: 'master'
TO_BRANCH: 'development'
- name: Label the pull request to ignore for release note generation
uses: actions-ecosystem/action-add-labels@v1
with:
labels: internal
repo: ${{ github.repository }}
number: ${{ steps.pull.outputs.PULL_REQUEST_NUMBER }}


@@ -28,7 +28,7 @@ jobs:
needs: smoke-test
strategy:
matrix:
distro: [debian_9, debian_10, debian_11, ubuntu_16, ubuntu_18, ubuntu_20, ubuntu_21, centos_7, centos_8, fedora_33, fedora_34]
distro: [debian_9, debian_10, debian_11, ubuntu_16, ubuntu_18, ubuntu_20, ubuntu_21, centos_7, centos_8, fedora_32, fedora_33]
env:
DISTRO: ${{matrix.distro}}
steps:
@@ -36,10 +36,10 @@ jobs:
name: Checkout repository
uses: actions/checkout@v2
-
name: Set up Python 3.8
name: Set up Python 3.7
uses: actions/setup-python@v2
with:
python-version: 3.8
python-version: 3.7
-
name: Install dependencies
run: pip install -r test/requirements.txt

.gitignore

@@ -9,4 +9,3 @@ __pycache__
*.egg-info
.idea/
*.iml
.vscode/


@@ -2,6 +2,111 @@
Please read and understand the contribution guide before creating an issue or pull request.
The guide can be found here: [https://docs.pi-hole.net/guides/github/contributing/](https://docs.pi-hole.net/guides/github/contributing/)
## Etiquette
- Our goal for Pi-hole is **stability before features**. This means we focus on squashing critical bugs before adding new features. Often, we can do both in tandem, but bugs will take priority over a new feature.
- Pi-hole is open source and [powered by donations](https://pi-hole.net/donate/), and as such, we give our **free time** to build, maintain, and **provide user support** for this project. It would be extremely unfair for us to suffer abuse or anger for our hard work, so please take a moment to consider that.
- Please be considerate towards the developers and other users when raising issues or presenting pull requests.
- Respect our decision(s), and do not be upset or abusive if your submission is not used.
## Viability
When requesting or submitting new features, first consider whether it might be useful to others. Open source projects are used by many people, who may have entirely different needs to your own. Think about whether or not your feature is likely to be used by other users of the project.
## Procedure
**Before filing an issue:**
- Attempt to replicate and **document** the problem, to ensure that it wasn't a coincidental incident.
- Check to make sure your feature suggestion isn't already present within the project.
- Check the pull requests tab to ensure that the bug doesn't have a fix in progress.
- Check the pull requests tab to ensure that the feature isn't already in progress.
**Before submitting a pull request:**
- Check the codebase to ensure that your feature doesn't already exist.
- Check the pull requests to ensure that another person hasn't already submitted the feature or fix.
- Read and understand the [DCO guidelines](https://docs.pi-hole.net/guides/github/contributing/) for the project.
## Technical Requirements
- Submit Pull Requests to the **development branch only**.
- Before Submitting your Pull Request, merge `development` with your new branch and fix any conflicts. (Make sure you don't break anything in development!)
- Please use the [Google Style Guide for Shell](https://google.github.io/styleguide/shell.xml) for your code submission styles.
- Commit Unix line endings.
- Please use the Pi-hole brand: **Pi-hole** (note the capitalized 'P', the lowercase 'h', and the hyphen)
- (Optional fun) keep to the theme of Star Trek/black holes/gravity.
## Forking and Cloning from GitHub to GitHub
1. Fork <https://github.com/pi-hole/pi-hole/> to a repo under a namespace you control, or have permission to use, for example: `https://github.com/<your_namespace>/<your_repo_name>/`. You can do this from the github.com website.
2. Clone `https://github.com/<your_namespace>/<your_repo_name>/` with the tool of your choice.
3. To keep your fork in sync with our repo, add an upstream remote for pi-hole/pi-hole to your repo.
```bash
git remote add upstream https://github.com/pi-hole/pi-hole.git
```
4. Checkout the `development` branch from your fork `https://github.com/<your_namespace>/<your_repo_name>/`.
5. Create a topic branch based on the `development` branch code. *Bonus fun to keep to the theme of Star Trek/black holes/gravity.*
6. Make your changes and commit to your topic branch in your repo.
7. Rebase your commits and squash any insignificant commits. See the notes below for an example.
8. Merge `development` with your branch and fix any conflicts (see the sketch after this list).
9. Open a Pull Request to merge your topic branch into our repo's `development` branch.
- Keep in mind the technical requirements from above.
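
A minimal sketch of steps 4-8, assuming your fork is the `origin` remote, the `upstream` remote was added as in step 3, and `warp-core` stands in for your topic branch name:

```bash
# Keep your fork's development branch in sync with upstream (step 4)
git fetch upstream
git checkout development
git merge upstream/development
git push origin development

# Create a topic branch from development and commit your work (steps 5-6)
git checkout -b warp-core development
git commit -s -m "Describe your change"

# Squash insignificant commits, then merge development and fix any conflicts (steps 7-8)
git rebase -i development
git merge development

# Push the topic branch to your fork before opening the PR (step 9)
git push -u origin warp-core
```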
## Forking and Cloning from GitHub to other code hosting sites
- Forking is a GitHub concept and cannot be done from GitHub to other git-based code hosting sites. However, those sites may be able to mirror a GitHub repo.
1. To contribute from another code hosting site, you must first complete the steps above to fork our repo to a GitHub namespace you have permission to use, for example: `https://github.com/<your_namespace>/<your_repo_name>/`.
2. Create a repo in your code hosting site, for example: `https://gitlab.com/<your_namespace>/<your_repo_name>/`
3. Follow the instructions from your code hosting site to create a mirror between `https://github.com/<your_namespace>/<your_repo_name>/` and `https://gitlab.com/<your_namespace>/<your_repo_name>/`.
4. When you are ready to create a Pull Request (PR), follow the steps `(starting at step #6)` from [Forking and Cloning from GitHub to GitHub](#forking-and-cloning-from-github-to-github) and create the PR from `https://github.com/<your_namespace>/<your_repo_name>/`.
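
The mirroring step itself is host-specific, so no single command covers it. One manual fallback (not part of the official instructions) is to push the same branches to both remotes yourself; a rough sketch, with `gitlab` as a hypothetical remote name:

```bash
# Hypothetical manual mirror: push the fork's branches to the second host as well
git remote add gitlab https://gitlab.com/<your_namespace>/<your_repo_name>.git
git push gitlab development
git push gitlab my-topic-branch
```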
## Notes for squashing commits with rebase
- To rebase your commits and squash previous commits, you can use:
```bash
git rebase -i your_topic_branch~(number of commits to combine)
```
- For more details visit [gitready.com](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html)
1. The following would combine the last four commits in the branch `mytopic`.
```bash
git rebase -i mytopic~4
```
2. An editor window opens with the most recent commits indicated: (edit the commands to the left of the commit ID)
```gitattributes
pick 9dff55b2 existing commit comments
squash ebb1a730 existing commit comments
squash 07cc5b50 existing commit comments
reword 9dff55b2 existing commit comments
```
3. Save and close the editor. The next editor window opens: (edit the new commit message). *If you select reword for a commit, an additional editor window will open for you to edit the comment.*
```bash
new commit comments
Signed-off-by: yourname <your email address>
```
4. Save and close the editor for the rebase process to execute. The terminal output should say something like the following:
```bash
Successfully rebased and updated refs/heads/mytopic.
```
5. Once you have a successful rebase, and before you sync your local clone, you have to force push origin to update your repo:
```bash
git push -f origin
```
6. Continue on from step #7 from [Forking and Cloning from GitHub to GitHub](#forking-and-cloning-from-github-to-github)


@@ -25,12 +25,11 @@ server=/localhost/
server=/invalid/
# The same RFC requests something similar for
# 10.in-addr.arpa. 21.172.in-addr.arpa. 27.172.in-addr.arpa.
# 16.172.in-addr.arpa. 22.172.in-addr.arpa. 28.172.in-addr.arpa.
# 17.172.in-addr.arpa. 23.172.in-addr.arpa. 29.172.in-addr.arpa.
# 18.172.in-addr.arpa. 24.172.in-addr.arpa. 30.172.in-addr.arpa.
# 19.172.in-addr.arpa. 25.172.in-addr.arpa. 31.172.in-addr.arpa.
# 20.172.in-addr.arpa. 26.172.in-addr.arpa. 168.192.in-addr.arpa.
# 16.172.in-addr.arpa. 22.172.in-addr.arpa. 27.172.in-addr.arpa.
# 17.172.in-addr.arpa. 30.172.in-addr.arpa. 28.172.in-addr.arpa.
# 18.172.in-addr.arpa. 23.172.in-addr.arpa. 29.172.in-addr.arpa.
# 19.172.in-addr.arpa. 24.172.in-addr.arpa. 31.172.in-addr.arpa.
# 20.172.in-addr.arpa. 25.172.in-addr.arpa. 168.192.in-addr.arpa.
# Pi-hole implements this via the dnsmasq option "bogus-priv" (see
# 01-pihole.conf) because this also covers IPv6.


@@ -357,7 +357,7 @@ get_sys_stats() {
ram_used="${ram_raw[1]}"
ram_total="${ram_raw[2]}"
if [[ "$(pihole status web 2> /dev/null)" -ge "1" ]]; then
if [[ "$(pihole status web 2> /dev/null)" == "1" ]]; then
ph_status="${COL_LIGHT_GREEN}Active"
else
ph_status="${COL_LIGHT_RED}Offline"


@@ -19,13 +19,13 @@ upgrade_gravityDB(){
auditFile="${piholeDir}/auditlog.list"
# Get database version
version="$(pihole-FTL sqlite3 "${database}" "SELECT \"value\" FROM \"info\" WHERE \"property\" = 'version';")"
version="$(sqlite3 "${database}" "SELECT \"value\" FROM \"info\" WHERE \"property\" = 'version';")"
if [[ "$version" == "1" ]]; then
# This migration script upgrades the gravity.db file by
# adding the domain_audit table
echo -e " ${INFO} Upgrading gravity database from version 1 to 2"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/1_to_2.sql"
sqlite3 "${database}" < "${scriptPath}/1_to_2.sql"
version=2
# Store audit domains in database table
@@ -40,28 +40,28 @@ upgrade_gravityDB(){
# renaming the regex table to regex_blacklist, and
# creating a new regex_whitelist table + corresponding linking table and views
echo -e " ${INFO} Upgrading gravity database from version 2 to 3"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/2_to_3.sql"
sqlite3 "${database}" < "${scriptPath}/2_to_3.sql"
version=3
fi
if [[ "$version" == "3" ]]; then
# This migration script unifies the formally separated domain
# lists into a single table with a UNIQUE domain constraint
echo -e " ${INFO} Upgrading gravity database from version 3 to 4"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/3_to_4.sql"
sqlite3 "${database}" < "${scriptPath}/3_to_4.sql"
version=4
fi
if [[ "$version" == "4" ]]; then
# This migration script upgrades the gravity and list views
# implementing necessary changes for per-client blocking
echo -e " ${INFO} Upgrading gravity database from version 4 to 5"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/4_to_5.sql"
sqlite3 "${database}" < "${scriptPath}/4_to_5.sql"
version=5
fi
if [[ "$version" == "5" ]]; then
# This migration script upgrades the adlist view
# to return an ID used in gravity.sh
echo -e " ${INFO} Upgrading gravity database from version 5 to 6"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/5_to_6.sql"
sqlite3 "${database}" < "${scriptPath}/5_to_6.sql"
version=6
fi
if [[ "$version" == "6" ]]; then
@@ -69,7 +69,7 @@ upgrade_gravityDB(){
# which is automatically associated to all clients not
# having their own group assignments
echo -e " ${INFO} Upgrading gravity database from version 6 to 7"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/6_to_7.sql"
sqlite3 "${database}" < "${scriptPath}/6_to_7.sql"
version=7
fi
if [[ "$version" == "7" ]]; then
@@ -77,21 +77,21 @@ upgrade_gravityDB(){
# to ensure uniqueness on the group name
# We also add date_added and date_modified columns
echo -e " ${INFO} Upgrading gravity database from version 7 to 8"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/7_to_8.sql"
sqlite3 "${database}" < "${scriptPath}/7_to_8.sql"
version=8
fi
if [[ "$version" == "8" ]]; then
# This migration fixes some issues that were introduced
# in the previous migration script.
echo -e " ${INFO} Upgrading gravity database from version 8 to 9"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/8_to_9.sql"
sqlite3 "${database}" < "${scriptPath}/8_to_9.sql"
version=9
fi
if [[ "$version" == "9" ]]; then
# This migration drops unused tables and creates triggers to remove
# obsolete groups assignments when the linked items are deleted
echo -e " ${INFO} Upgrading gravity database from version 9 to 10"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/9_to_10.sql"
sqlite3 "${database}" < "${scriptPath}/9_to_10.sql"
version=10
fi
if [[ "$version" == "10" ]]; then
@@ -101,31 +101,25 @@ upgrade_gravityDB(){
# to keep the copying process generic (needs the same columns in both the
# source and the destination databases).
echo -e " ${INFO} Upgrading gravity database from version 10 to 11"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/10_to_11.sql"
sqlite3 "${database}" < "${scriptPath}/10_to_11.sql"
version=11
fi
if [[ "$version" == "11" ]]; then
# Rename group 0 from "Unassociated" to "Default"
echo -e " ${INFO} Upgrading gravity database from version 11 to 12"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/11_to_12.sql"
sqlite3 "${database}" < "${scriptPath}/11_to_12.sql"
version=12
fi
if [[ "$version" == "12" ]]; then
# Add column date_updated to adlist table
echo -e " ${INFO} Upgrading gravity database from version 12 to 13"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/12_to_13.sql"
sqlite3 "${database}" < "${scriptPath}/12_to_13.sql"
version=13
fi
if [[ "$version" == "13" ]]; then
# Add columns number and status to adlist table
echo -e " ${INFO} Upgrading gravity database from version 13 to 14"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/13_to_14.sql"
sqlite3 "${database}" < "${scriptPath}/13_to_14.sql"
version=14
fi
if [[ "$version" == "14" ]]; then
# Changes the vw_adlist created in 5_to_6
echo -e " ${INFO} Upgrading gravity database from version 14 to 15"
pihole-FTL sqlite3 "${database}" < "${scriptPath}/14_to_15.sql"
version=15
fi
}
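
For reference, the schema version these migrations key off can be read back by hand. A quick check, assuming the default database path; newer builds bundle the SQLite shell as `pihole-FTL sqlite3`, while older ones call the system `sqlite3`, which is exactly the difference this hunk shows:

```bash
# Print the current gravity database schema version
pihole-FTL sqlite3 /etc/pihole/gravity.db \
  "SELECT value FROM info WHERE property = 'version';"
```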


@@ -1,15 +0,0 @@
.timeout 30000
PRAGMA FOREIGN_KEYS=OFF;
BEGIN TRANSACTION;
DROP VIEW vw_adlist;
CREATE VIEW vw_adlist AS SELECT DISTINCT address, id
FROM adlist
WHERE enabled = 1
ORDER BY id;
UPDATE info SET value = 15 WHERE property = 'version';
COMMIT;


@@ -91,8 +91,7 @@ Options:
-q, --quiet Make output less verbose
-h, --help Show this help dialog
-l, --list Display all your ${listname}listed domains
--nuke Removes all entries in a list
--comment \"text\" Add a comment to the domain. If adding multiple domains the same comment will be used for all"
--nuke Removes all entries in a list"
exit 0
}
@@ -142,18 +141,18 @@ AddDomain() {
domain="$1"
# Is the domain in the list we want to add it to?
num="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}';")"
num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}';")"
requestedListname="$(GetListnameFromTypeId "${typeId}")"
if [[ "${num}" -ne 0 ]]; then
existingTypeId="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT type FROM domainlist WHERE domain = '${domain}';")"
existingTypeId="$(sqlite3 "${gravityDBfile}" "SELECT type FROM domainlist WHERE domain = '${domain}';")"
if [[ "${existingTypeId}" == "${typeId}" ]]; then
if [[ "${verbose}" == true ]]; then
echo -e " ${INFO} ${1} already exists in ${requestedListname}, no need to add!"
fi
else
existingListname="$(GetListnameFromTypeId "${existingTypeId}")"
pihole-FTL sqlite3 "${gravityDBfile}" "UPDATE domainlist SET type = ${typeId} WHERE domain='${domain}';"
sqlite3 "${gravityDBfile}" "UPDATE domainlist SET type = ${typeId} WHERE domain='${domain}';"
if [[ "${verbose}" == true ]]; then
echo -e " ${INFO} ${1} already exists in ${existingListname}, it has been moved to ${requestedListname}!"
fi
@@ -169,10 +168,10 @@ AddDomain() {
# Insert only the domain here. The enabled and date_added fields will be filled
# with their default values (enabled = true, date_added = current timestamp)
if [[ -z "${comment}" ]]; then
pihole-FTL sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type) VALUES ('${domain}',${typeId});"
sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type) VALUES ('${domain}',${typeId});"
else
# also add comment when variable has been set through the "--comment" option
pihole-FTL sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type,comment) VALUES ('${domain}',${typeId},'${comment}');"
sqlite3 "${gravityDBfile}" "INSERT INTO domainlist (domain,type,comment) VALUES ('${domain}',${typeId},'${comment}');"
fi
}
@@ -181,7 +180,7 @@ RemoveDomain() {
domain="$1"
# Is the domain in the list we want to remove it from?
num="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};")"
num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};")"
requestedListname="$(GetListnameFromTypeId "${typeId}")"
@@ -198,14 +197,14 @@ RemoveDomain() {
fi
reload=true
# Remove it from the current list
pihole-FTL sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};"
sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE domain = '${domain}' AND type = ${typeId};"
}
Displaylist() {
local count num_pipes domain enabled status nicedate requestedListname
requestedListname="$(GetListnameFromTypeId "${typeId}")"
data="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT domain,enabled,date_modified FROM domainlist WHERE type = ${typeId};" 2> /dev/null)"
data="$(sqlite3 "${gravityDBfile}" "SELECT domain,enabled,date_modified FROM domainlist WHERE type = ${typeId};" 2> /dev/null)"
if [[ -z $data ]]; then
echo -e "Not showing empty list"
@@ -243,10 +242,10 @@ Displaylist() {
}
NukeList() {
count=$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(1) FROM domainlist WHERE type = ${typeId};")
count=$(sqlite3 "${gravityDBfile}" "SELECT COUNT(1) FROM domainlist WHERE type = ${typeId};")
listname="$(GetListnameFromTypeId "${typeId}")"
if [ "$count" -gt 0 ];then
pihole-FTL sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE type = ${typeId};"
sqlite3 "${gravityDBfile}" "DELETE FROM domainlist WHERE type = ${typeId};"
echo " ${TICK} Removed ${count} domain(s) from the ${listname}"
else
echo " ${INFO} ${listname} already empty. Nothing to do!"
@@ -293,7 +292,7 @@ ProcessDomainList
# Used on web interface
if $web; then
echo "DONE"
echo "DONE"
fi
if [[ ${reload} == true && ${noReloadRequested} == false ]]; then
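
Typical invocations that exercise the list-management code above, as a hedged sketch; the domains are placeholders, and `--comment` only exists on the side of this comparison whose help text still documents it:

```bash
pihole -w example.com --comment "allowed for the smart TV"  # add to the whitelist with a comment
pihole -b ads.example.net                                   # add to the blacklist
pihole -w --list                                            # show whitelisted domains
pihole -b --nuke                                            # remove every blacklist entry
```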


@@ -39,7 +39,7 @@ flushARP(){
# Truncate network_addresses table in pihole-FTL.db
# This needs to be done before we can truncate the network table due to
# foreign key constraints
if ! output=$(pihole-FTL sqlite3 "${DBFILE}" "DELETE FROM network_addresses" 2>&1); then
if ! output=$(sqlite3 "${DBFILE}" "DELETE FROM network_addresses" 2>&1); then
echo -e "${OVER} ${CROSS} Failed to truncate network_addresses table"
echo " Database location: ${DBFILE}"
echo " Output: ${output}"
@@ -47,7 +47,7 @@ flushARP(){
fi
# Truncate network table in pihole-FTL.db
if ! output=$(pihole-FTL sqlite3 "${DBFILE}" "DELETE FROM network" 2>&1); then
if ! output=$(sqlite3 "${DBFILE}" "DELETE FROM network" 2>&1); then
echo -e "${OVER} ${CROSS} Failed to truncate network table"
echo " Database location: ${DBFILE}"
echo " Output: ${output}"


@@ -88,7 +88,6 @@ PIHOLE_LOCAL_HOSTS_FILE="${PIHOLE_DIRECTORY}/local.list"
PIHOLE_LOGROTATE_FILE="${PIHOLE_DIRECTORY}/logrotate"
PIHOLE_SETUP_VARS_FILE="${PIHOLE_DIRECTORY}/setupVars.conf"
PIHOLE_FTL_CONF_FILE="${PIHOLE_DIRECTORY}/pihole-FTL.conf"
PIHOLE_CUSTOM_HOSTS_FILE="${PIHOLE_DIRECTORY}/custom.list"
# Read the value of an FTL config key. The value is printed to stdout.
#
@@ -180,8 +179,7 @@ REQUIRED_FILES=("${PIHOLE_CRON_FILE}"
"${PIHOLE_WEB_SERVER_ACCESS_LOG_FILE}"
"${PIHOLE_WEB_SERVER_ERROR_LOG_FILE}"
"${RESOLVCONF}"
"${DNSMASQ_CONF}"
"${PIHOLE_CUSTOM_HOSTS_FILE}")
"${DNSMASQ_CONF}")
DISCLAIMER="This process collects information from your Pi-hole, and optionally uploads it to a unique and random directory on tricorder.pi-hole.net.
@@ -467,9 +465,6 @@ diagnose_operating_system() {
# Display the current test that is running
echo_current_diagnostic "Operating system"
# If the PIHOLE_DOCKER_TAG variable is set, include this information in the debug output
[ -n "${PIHOLE_DOCKER_TAG}" ] && log_write "${INFO} Pi-hole Docker Container: ${PIHOLE_DOCKER_TAG}"
# If there is a /etc/*release file, it's probably a supported operating system, so we can
if ls /etc/*release 1> /dev/null 2>&1; then
# display the attributes to the user from the function made earlier
@@ -590,27 +585,6 @@ processor_check() {
fi
}
disk_usage() {
local file_system
local hide
echo_current_diagnostic "Disk usage"
mapfile -t file_system < <(df -h)
# Some lines of df might contain sensitive information like usernames and passwords.
# E.g. curlftpfs filesystems (https://www.looklinux.com/mount-ftp-share-on-linux-using-curlftps/)
# We are not interested in those lines so we collect keyword, to remove them from the output
# Additinal keywords can be added, separated by "|"
hide="curlftpfs"
# only show those lines not containg a sensitive phrase
for line in "${file_system[@]}"; do
if [[ ! $line =~ $hide ]]; then
log_write " ${line}"
fi
done
}
parse_setup_vars() {
echo_current_diagnostic "Setup variables"
# If the file exists,
@@ -733,11 +707,11 @@ compare_port_to_service_assigned() {
# If the service is a Pi-hole service, highlight it in green
if [[ "${service_name}" == "${expected_service}" ]]; then
log_write "${TICK} ${COL_GREEN}${port}${COL_NC} is in use by ${COL_GREEN}${service_name}${COL_NC}"
log_write "[${COL_GREEN}${port}${COL_NC}] is in use by ${COL_GREEN}${service_name}${COL_NC}"
# Otherwise,
else
# Show the service name in red since it's non-standard
log_write "${CROSS} ${COL_RED}${port}${COL_NC} is in use by ${COL_RED}${service_name}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_PORTS})"
log_write "[${COL_RED}${port}${COL_NC}] is in use by ${COL_RED}${service_name}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_PORTS})"
fi
}
@@ -753,47 +727,36 @@ check_required_ports() {
# Sort the addresses and remove duplicates
while IFS= read -r line; do
ports_in_use+=( "$line" )
done < <( ss --listening --numeric --tcp --udp --processes --no-header )
done < <( lsof -iTCP -sTCP:LISTEN -P -n +c 10 )
# Now that we have the values stored,
for i in "${!ports_in_use[@]}"; do
# loop through them and assign some local variables
local service_name
service_name=$(echo "${ports_in_use[$i]}" | awk '{gsub(/users:\(\("/,"",$7);gsub(/".*/,"",$7);print $7}')
service_name=$(echo "${ports_in_use[$i]}" | awk '{print $1}')
local protocol_type
protocol_type=$(echo "${ports_in_use[$i]}" | awk '{print $1}')
protocol_type=$(echo "${ports_in_use[$i]}" | awk '{print $5}')
local port_number
port_number="$(echo "${ports_in_use[$i]}" | awk '{print $5}')" # | awk '{gsub(/^.*:/,"",$5);print $5}')
port_number="$(echo "${ports_in_use[$i]}" | awk '{print $9}')"
# Skip the line if it's the titles of the columns the lsof command produces
if [[ "${service_name}" == COMMAND ]]; then
continue
fi
# Use a case statement to determine if the right services are using the right ports
case "$(echo "${port_number}" | rev | cut -d: -f1 | rev)" in
53) compare_port_to_service_assigned "${resolver}" "${service_name}" "${protocol_type}:${port_number}"
case "$(echo "$port_number" | rev | cut -d: -f1 | rev)" in
53) compare_port_to_service_assigned "${resolver}" "${service_name}" 53
;;
80) compare_port_to_service_assigned "${web_server}" "${service_name}" "${protocol_type}:${port_number}"
80) compare_port_to_service_assigned "${web_server}" "${service_name}" 80
;;
4711) compare_port_to_service_assigned "${ftl}" "${service_name}" "${protocol_type}:${port_number}"
4711) compare_port_to_service_assigned "${ftl}" "${service_name}" 4711
;;
# If it's not a default port that Pi-hole needs, just print it out for the user to see
*) log_write " ${protocol_type}:${port_number} is in use by ${service_name:=<unknown>}";
*) log_write "${port_number} ${service_name} (${protocol_type})";
esac
done
}
ip_command() {
# Obtain and log information from "ip XYZ show" commands
echo_current_diagnostic "${2}"
local entries=()
mapfile -t entries < <(ip "${1}" show)
for line in "${entries[@]}"; do
log_write " ${line}"
done
}
check_ip_command() {
ip_command "addr" "Network interfaces and addresses"
ip_command "route" "Network routing table"
}
check_networking() {
# Runs through several of the functions made earlier; we just clump them
# together since they are all related to the networking aspect of things
@@ -802,9 +765,7 @@ check_networking() {
detect_ip_addresses "6"
ping_gateway "4"
ping_gateway "6"
# Skip the following check if installed in docker container. Unpriv'ed containers do not have access to the information required
# to resolve the service name listening - and the container should not start if there was a port conflict anyway
[ -z "${PIHOLE_DOCKER_TAG}" ] && check_required_ports
check_required_ports
}
check_x_headers() {
@@ -816,29 +777,13 @@ check_x_headers() {
# server is operating correctly
echo_current_diagnostic "Dashboard and block page"
# Use curl -I to get the header and parse out just the X-Pi-hole one
local block_page
block_page=$(curl -Is localhost | awk '/X-Pi-hole/' | tr -d '\r')
# Do it for the dashboard as well, as the header is different than above
local dashboard
dashboard=$(curl -Is localhost/admin/ | awk '/X-Pi-hole/' | tr -d '\r')
# Store what the X-Header should be in variables for comparison later
local block_page_working
block_page_working="X-Pi-hole: A black hole for Internet advertisements."
local dashboard_working
dashboard_working="X-Pi-hole: The Pi-hole Web interface is working!"
local full_curl_output_block_page
full_curl_output_block_page="$(curl -Is localhost)"
local full_curl_output_dashboard
full_curl_output_dashboard="$(curl -Is localhost/admin/)"
# If the X-header found by curl matches what is should be,
if [[ $block_page == "$block_page_working" ]]; then
# display a success message
log_write "$TICK Block page X-Header: ${COL_GREEN}${block_page}${COL_NC}"
else
# Otherwise, show an error
log_write "$CROSS Block page X-Header: ${COL_RED}X-Header does not match or could not be retrieved.${COL_NC}"
log_write "${COL_RED}${full_curl_output_block_page}${COL_NC}"
fi
# Same logic applies to the dashboard as above, if the X-Header matches what a working system should have,
if [[ $dashboard == "$dashboard_working" ]]; then
@@ -888,7 +833,7 @@ dig_at() {
# This helps emulate queries to different domains that a user might query
# It will also give extra assurance that Pi-hole is correctly resolving and blocking domains
local random_url
random_url=$(pihole-FTL sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity ORDER BY RANDOM() LIMIT 1")
random_url=$(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity ORDER BY RANDOM() LIMIT 1")
# Next we need to check if Pi-hole can resolve a domain when the query is sent to it's IP address
# This better emulates how clients will interact with Pi-hole as opposed to above where Pi-hole is
@@ -1202,7 +1147,7 @@ show_db_entries() {
IFS=$'\r\n'
local entries=()
mapfile -t entries < <(\
pihole-FTL sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" \
sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" \
-cmd ".headers on" \
-cmd ".mode column" \
-cmd ".width ${widths}" \
@@ -1227,7 +1172,7 @@ show_FTL_db_entries() {
IFS=$'\r\n'
local entries=()
mapfile -t entries < <(\
pihole-FTL sqlite3 "${PIHOLE_FTL_DB_FILE}" \
sqlite3 "${PIHOLE_FTL_DB_FILE}" \
-cmd ".headers on" \
-cmd ".mode column" \
-cmd ".width ${widths}" \
@@ -1284,7 +1229,7 @@ analyze_gravity_list() {
log_write "${COL_GREEN}${gravity_permissions}${COL_NC}"
show_db_entries "Info table" "SELECT property,value FROM info" "20 40"
gravity_updated_raw="$(pihole-FTL sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT value FROM info where property = 'updated'")"
gravity_updated_raw="$(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT value FROM info where property = 'updated'")"
gravity_updated="$(date -d @"${gravity_updated_raw}")"
log_write " Last gravity run finished at: ${COL_CYAN}${gravity_updated}${COL_NC}"
log_write ""
@@ -1292,7 +1237,7 @@ analyze_gravity_list() {
OLD_IFS="$IFS"
IFS=$'\r\n'
local gravity_sample=()
mapfile -t gravity_sample < <(pihole-FTL sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity LIMIT 10")
mapfile -t gravity_sample < <(sqlite3 "${PIHOLE_GRAVITY_DB_FILE}" "SELECT domain FROM vw_gravity LIMIT 10")
log_write " ${COL_CYAN}----- First 10 Gravity Domains -----${COL_NC}"
for line in "${gravity_sample[@]}"; do
@@ -1404,7 +1349,7 @@ upload_to_tricorder() {
# Provide information on what they should do with their token
log_write " * The debug log can be uploaded to tricorder.pi-hole.net for sharing with developers only."
# If pihole -d is running automatically
# If pihole -d is running automatically (usually through the dashboard)
if [[ "${AUTOMATED}" ]]; then
# let the user know
log_write "${INFO} Debug script running in automated mode"
@@ -1412,8 +1357,6 @@ upload_to_tricorder() {
curl_to_tricorder
# If we're not running in automated mode,
else
# if not being called from the web interface
if [[ ! "${WEBCALL}" ]]; then
echo ""
# give the user a choice of uploading it or not
# Users can review the log file locally (or the output of the script since they are the same) and try to self-diagnose their problem
@@ -1425,7 +1368,6 @@ upload_to_tricorder() {
*) log_write " * Log will ${COL_GREEN}NOT${COL_NC} be uploaded to tricorder.\\n * A local copy of the debug log can be found at: ${COL_CYAN}${PIHOLE_DEBUG_LOG}${COL_NC}\\n";exit;
esac
fi
fi
# Check if tricorder.pi-hole.net is reachable and provide token
# along with some additional useful information
if [[ -n "${tricorder_token}" ]]; then
@@ -1444,14 +1386,9 @@ upload_to_tricorder() {
# If no token was generated
else
# Show an error and some help instructions
# Skip this if being called from web interface and autmatic mode was not chosen (users opt-out to upload)
if [[ "${WEBCALL}" ]] && [[ ! "${AUTOMATED}" ]]; then
:
else
log_write "${CROSS} ${COL_RED}There was an error uploading your debug log.${COL_NC}"
log_write " * Please try again or contact the Pi-hole team for assistance."
fi
fi
# Finally, show where the log file is no matter the outcome of the function so users can look at it
log_write " * A local copy of the debug log can be found at: ${COL_CYAN}${PIHOLE_DEBUG_LOG}${COL_NC}\\n"
}
@@ -1468,8 +1405,6 @@ diagnose_operating_system
check_selinux
check_firewalld
processor_check
disk_usage
check_ip_command
check_networking
check_name_resolution
check_dhcp_servers
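
The debug script above is normally driven through the `pihole` wrapper; a hedged sketch of the two entry points it distinguishes (the exact automated-mode flag is an assumption):

```bash
pihole -d      # interactive run: the user is asked before anything is uploaded
pihole -d -a   # automated run (dashboard/cron); takes the AUTOMATED branch shown above
```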


@@ -63,7 +63,7 @@ else
fi
fi
# Delete most recent 24 hours from FTL's database, leave even older data intact (don't wipe out all history)
deleted=$(pihole-FTL sqlite3 "${DBFILE}" "DELETE FROM queries WHERE timestamp >= strftime('%s','now')-86400; select changes() from queries limit 1")
deleted=$(sqlite3 "${DBFILE}" "DELETE FROM queries WHERE timestamp >= strftime('%s','now')-86400; select changes() from queries limit 1")
# Restart pihole-FTL to force reloading history
sudo pihole restartdns


@@ -121,7 +121,7 @@ scanDatabaseTable() {
fi
# Send prepared query to gravity database
result="$(pihole-FTL sqlite3 "${gravityDBfile}" "${querystr}")" 2> /dev/null
result="$(sqlite3 "${gravityDBfile}" "${querystr}")" 2> /dev/null
if [[ -z "${result}" ]]; then
# Return early when there are no matches in this table
return
@@ -164,7 +164,7 @@ scanRegexDatabaseTable() {
type="${3:-}"
# Query all regex from the corresponding database tables
mapfile -t regexList < <(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT domain FROM domainlist WHERE type = ${type}" 2> /dev/null)
mapfile -t regexList < <(sqlite3 "${gravityDBfile}" "SELECT domain FROM domainlist WHERE type = ${type}" 2> /dev/null)
# If we have regexps to process
if [[ "${#regexList[@]}" -ne 0 ]]; then
@@ -233,7 +233,7 @@ for result in "${results[@]}"; do
adlistAddress="${extra/|*/}"
extra="${extra#*|}"
if [[ "${extra}" == "0" ]]; then
extra=" (disabled)"
extra="(disabled)"
else
extra=""
fi
@@ -241,7 +241,7 @@ for result in "${results[@]}"; do
if [[ -n "${blockpage}" ]]; then
echo "0 ${adlistAddress}"
elif [[ -n "${exact}" ]]; then
echo " - ${adlistAddress}${extra}"
echo " - ${adlistAddress} ${extra}"
else
if [[ ! "${adlistAddress}" == "${adlistAddress_prev:-}" ]]; then
count=""
@@ -256,7 +256,7 @@ for result in "${results[@]}"; do
[[ "${count}" -gt "${max_count}" ]] && continue
echo " ${COL_GRAY}Over ${count} results found, skipping rest of file${COL_NC}"
else
echo " ${match}${extra}"
echo " ${match} ${extra}"
fi
fi
done
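
For context, the loop above formats the output of the list-query command. A usage sketch, with option spellings assumed from the script's conventions rather than confirmed by this diff:

```bash
pihole -q doubleclick.net          # search every list for a domain
pihole -q -exact doubleclick.net   # exact matches only (option spelling assumed)
pihole -q -all example.com         # print all matches instead of truncating
```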


@@ -35,7 +35,6 @@ source "/opt/pihole/COL_TABLE"
GitCheckUpdateAvail() {
local directory
local curBranch
directory="${1}"
curdir=$PWD
cd "${directory}" || return
@@ -43,15 +42,6 @@ GitCheckUpdateAvail() {
# Fetch latest changes in this repo
git fetch --quiet origin
# Check current branch. If it is master, then check for the latest available tag instead of latest commit.
curBranch=$(git rev-parse --abbrev-ref HEAD)
if [[ "${curBranch}" == "master" ]]; then
# get the latest local tag
LOCAL=$(git describe --abbrev=0 --tags master)
# get the latest tag from remote
REMOTE=$(git describe --abbrev=0 --tags origin/master)
else
# @ alone is a shortcut for HEAD. Older versions of git
# need @{0}
LOCAL="$(git rev-parse "@{0}")"
@@ -64,8 +54,6 @@ GitCheckUpdateAvail() {
# branch.<name>.merge). A missing branchname
# defaults to the current one.
REMOTE="$(git rev-parse "@{upstream}")"
fi
if [[ "${#LOCAL}" == 0 ]]; then
echo -e "\\n ${COL_LIGHT_RED}Error: Local revision could not be obtained, please contact Pi-hole Support"


@@ -1,35 +0,0 @@
#!/usr/bin/env bash
# Pi-hole: A black hole for Internet advertisements
# (c) 2017 Pi-hole, LLC (https://pi-hole.net)
# Network-wide ad blocking via your own hardware.
#
# Script to hold utility functions for use in other scripts
#
# This file is copyright under the latest version of the EUPL.
# Please see LICENSE file for your rights under this license.
# Basic Housekeeping rules
# - Functions must be self contained
# - Functions must be added in alphabetical order
# - Functions must be documented
# - New functions must have a test added for them in test/test_any_utils.py
#######################
# Takes three arguments key, value, and file.
# Checks the target file for the existence of the key
# - If it exists, it changes the value
# - If it does not exist, it adds the value
#
# Example usage:
# addOrEditKeyValuePair "BLOCKING_ENABLED" "true" "/etc/pihole/setupVars.conf"
#######################
addOrEditKeyValPair() {
local key="${1}"
local value="${2}"
local file="${3}"
if grep -q "^${key}=" "${file}"; then
sed -i "/^${key}=/c\\${key}=${value}" "${file}"
else
echo "${key}=${value}" >> "${file}"
fi
}
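
A usage sketch for the helper above; note that its own comment block spells the name `addOrEditKeyValuePair` while the function is defined as `addOrEditKeyValPair`, and the source path here is an assumption:

```bash
# Source the utility script, then set or update a key in setupVars.conf
source /opt/pihole/utils.sh   # installed path assumed
addOrEditKeyValPair "BLOCKING_ENABLED" "true" "/etc/pihole/setupVars.conf"
# The file now contains exactly one BLOCKING_ENABLED line:
grep "^BLOCKING_ENABLED=" /etc/pihole/setupVars.conf   # -> BLOCKING_ENABLED=true
```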


@@ -13,10 +13,6 @@ DEFAULT="-1"
COREGITDIR="/etc/.pihole/"
WEBGITDIR="/var/www/html/admin/"
# Source the setupvars config file
# shellcheck disable=SC1091
source /etc/pihole/setupVars.conf
getLocalVersion() {
# FTL requires a different method
if [[ "$1" == "FTL" ]]; then
@@ -95,11 +91,10 @@ getRemoteVersion(){
#If the above file exists, then we can read from that. Prevents overuse of GitHub API
if [[ -f "$cachedVersions" ]]; then
IFS=' ' read -r -a arrCache < "$cachedVersions"
case $daemon in
"pi-hole" ) echo "${arrCache[0]}";;
"AdminLTE" ) [[ "${INSTALL_WEB_INTERFACE}" == true ]] && echo "${arrCache[1]}";;
"FTL" ) [[ "${INSTALL_WEB_INTERFACE}" == true ]] && echo "${arrCache[2]}" || echo "${arrCache[1]}";;
"AdminLTE" ) echo "${arrCache[1]}";;
"FTL" ) echo "${arrCache[2]}";;
esac
return 0
@@ -145,11 +140,6 @@ getLocalBranch(){
}
versionOutput() {
if [[ "$1" == "AdminLTE" && "${INSTALL_WEB_INTERFACE}" != true ]]; then
echo " WebAdmin not installed"
return 1
fi
[[ "$1" == "pi-hole" ]] && GITDIR=$COREGITDIR
[[ "$1" == "AdminLTE" ]] && GITDIR=$WEBGITDIR
[[ "$1" == "FTL" ]] && GITDIR="FTL"
@@ -176,7 +166,6 @@ versionOutput() {
output="Latest ${1^} hash is $latHash"
else
errorOutput
return 1
fi
[[ -n "$output" ]] && echo " $output"
@@ -188,6 +177,10 @@ errorOutput() {
}
defaultOutput() {
# Source the setupvars config file
# shellcheck disable=SC1091
source /etc/pihole/setupVars.conf
versionOutput "pi-hole" "$@"
if [[ "${INSTALL_WEB_INTERFACE}" == true ]]; then


@@ -45,8 +45,7 @@ Options:
-h, --help Show this help dialog
-i, interface Specify dnsmasq's interface listening behavior
-l, privacylevel Set privacy level (0 = lowest, 3 = highest)
-t, teleporter Backup configuration as an archive
-t, teleporter myname.tar.gz Backup configuration to archive with name myname.tar.gz as specified"
-t, teleporter Backup configuration as an archive"
exit 0
}
@@ -200,8 +199,6 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC68345710423
# Setup interface listening behavior of dnsmasq
delete_dnsmasq_setting "interface"
delete_dnsmasq_setting "local-service"
delete_dnsmasq_setting "except-interface"
delete_dnsmasq_setting "bind-interfaces"
if [[ "${DNSMASQ_LISTENING}" == "all" ]]; then
# Listen on all interfaces, permit all origins
@@ -210,7 +207,6 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC68345710423
# Listen only on all interfaces, but only local subnets
add_dnsmasq_setting "local-service"
else
# Options "bind" and "single"
# Listen only on one interface
# Use eth0 as fallback interface if interface is missing in setupVars.conf
if [ -z "${PIHOLE_INTERFACE}" ]; then
@@ -218,11 +214,6 @@ trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC68345710423
fi
add_dnsmasq_setting "interface" "${PIHOLE_INTERFACE}"
if [[ "${DNSMASQ_LISTENING}" == "bind" ]]; then
# Really bind to interface
add_dnsmasq_setting "bind-interfaces"
fi
fi
if [[ "${CONDITIONAL_FORWARDING}" == true ]]; then
@@ -524,13 +515,13 @@ CustomizeAdLists() {
if CheckUrl "${address}"; then
if [[ "${args[2]}" == "enable" ]]; then
pihole-FTL sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 1 WHERE address = '${address}'"
sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 1 WHERE address = '${address}'"
elif [[ "${args[2]}" == "disable" ]]; then
pihole-FTL sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 0 WHERE address = '${address}'"
sqlite3 "${gravityDBfile}" "UPDATE adlist SET enabled = 0 WHERE address = '${address}'"
elif [[ "${args[2]}" == "add" ]]; then
pihole-FTL sqlite3 "${gravityDBfile}" "INSERT OR IGNORE INTO adlist (address, comment) VALUES ('${address}', '${comment}')"
sqlite3 "${gravityDBfile}" "INSERT OR IGNORE INTO adlist (address, comment) VALUES ('${address}', '${comment}')"
elif [[ "${args[2]}" == "del" ]]; then
pihole-FTL sqlite3 "${gravityDBfile}" "DELETE FROM adlist WHERE address = '${address}'"
sqlite3 "${gravityDBfile}" "DELETE FROM adlist WHERE address = '${address}'"
else
echo "Not permitted"
return 1
@@ -541,6 +532,25 @@ CustomizeAdLists() {
fi
}
SetPrivacyMode() {
if [[ "${args[2]}" == "true" ]]; then
change_setting "API_PRIVACY_MODE" "true"
else
change_setting "API_PRIVACY_MODE" "false"
fi
}
ResolutionSettings() {
typ="${args[2]}"
state="${args[3]}"
if [[ "${typ}" == "forward" ]]; then
change_setting "API_GET_UPSTREAM_DNS_HOSTNAME" "${state}"
elif [[ "${typ}" == "clients" ]]; then
change_setting "API_GET_CLIENT_HOSTNAME" "${state}"
fi
}
AddDHCPStaticAddress() {
mac="${args[2]}"
ip="${args[3]}"
@@ -609,10 +619,9 @@ Example: 'pihole -a -i local'
Specify dnsmasq's network interface listening behavior
Interfaces:
local Only respond to queries from devices that
are at most one hop away (local devices)
single Respond only on interface ${PIHOLE_INTERFACE}
bind Bind only on interface ${PIHOLE_INTERFACE}
local Listen on all interfaces, but only allow queries from
devices that are at most one hop away (local devices)
single Listen only on ${PIHOLE_INTERFACE} interface
all Listen on all interfaces, permit all origins"
exit 0
fi
@@ -623,9 +632,6 @@ Interfaces:
elif [[ "${args[2]}" == "local" ]]; then
echo -e " ${INFO} Listening on all interfaces, permitting origins from one hop away (LAN)"
change_setting "DNSMASQ_LISTENING" "local"
elif [[ "${args[2]}" == "bind" ]]; then
echo -e " ${INFO} Binding on interface ${PIHOLE_INTERFACE}"
change_setting "DNSMASQ_LISTENING" "bind"
else
echo -e " ${INFO} Listening only on interface ${PIHOLE_INTERFACE}"
change_setting "DNSMASQ_LISTENING" "single"
@@ -641,17 +647,12 @@ Interfaces:
}
Teleporter() {
local filename
filename="${args[2]}"
if [[ -z "${filename}" ]]; then
local datetimestamp
local host
datetimestamp=$(date "+%Y-%m-%d_%H-%M-%S")
host=$(hostname)
host="${host//./_}"
filename="pi-hole-${host:-noname}-teleporter_${datetimestamp}.tar.gz"
fi
php /var/www/html/admin/scripts/pi-hole/php/teleporter.php > "${filename}"
php /var/www/html/admin/scripts/pi-hole/php/teleporter.php > "pi-hole-${host:-noname}-teleporter_${datetimestamp}.tar.gz"
}
checkDomain()
@@ -687,12 +688,12 @@ addAudit()
done
# Insert only the domain here. The date_added field will be
# filled with its default value (date_added = current timestamp)
pihole-FTL sqlite3 "${gravityDBfile}" "INSERT INTO domain_audit (domain) VALUES ${domains};"
sqlite3 "${gravityDBfile}" "INSERT INTO domain_audit (domain) VALUES ${domains};"
}
clearAudit()
{
pihole-FTL sqlite3 "${gravityDBfile}" "DELETE FROM domain_audit;"
sqlite3 "${gravityDBfile}" "DELETE FROM domain_audit;"
}
SetPrivacyLevel() {
@@ -708,25 +709,10 @@ AddCustomDNSAddress() {
ip="${args[2]}"
host="${args[3]}"
reload="${args[4]}"
echo "${ip} ${host}" >> "${dnscustomfile}"
validHost="$(checkDomain "${host}")"
if [[ -n "${validHost}" ]]; then
if valid_ip "${ip}" || valid_ip6 "${ip}" ; then
echo "${ip} ${validHost}" >> "${dnscustomfile}"
else
echo -e " ${CROSS} Invalid IP has been passed"
exit 1
fi
else
echo " ${CROSS} Invalid Domain passed!"
exit 1
fi
# Restart dnsmasq to load new custom DNS entries only if $reload not false
if [[ ! $reload == "false" ]]; then
# Restart dnsmasq to load new custom DNS entries
RestartDNS
fi
}
RemoveCustomDNSAddress() {
@@ -734,25 +720,16 @@ RemoveCustomDNSAddress() {
ip="${args[2]}"
host="${args[3]}"
reload="${args[4]}"
validHost="$(checkDomain "${host}")"
if [[ -n "${validHost}" ]]; then
if valid_ip "${ip}" || valid_ip6 "${ip}" ; then
sed -i "/^${ip} ${validHost}$/Id" "${dnscustomfile}"
sed -i "/^${ip} ${host}$/d" "${dnscustomfile}"
else
echo -e " ${CROSS} Invalid IP has been passed"
exit 1
fi
else
echo " ${CROSS} Invalid Domain passed!"
exit 1
fi
# Restart dnsmasq to load new custom DNS entries only if reload is not false
if [[ ! $reload == "false" ]]; then
# Restart dnsmasq to update removed custom DNS entries
RestartDNS
fi
}
AddCustomCNAMERecord() {
@@ -760,25 +737,11 @@ AddCustomCNAMERecord() {
domain="${args[2]}"
target="${args[3]}"
reload="${args[4]}"
validDomain="$(checkDomain "${domain}")"
if [[ -n "${validDomain}" ]]; then
validTarget="$(checkDomain "${target}")"
if [[ -n "${validTarget}" ]]; then
echo "cname=${validDomain},${validTarget}" >> "${dnscustomcnamefile}"
else
echo " ${CROSS} Invalid Target Passed!"
exit 1
fi
else
echo " ${CROSS} Invalid Domain passed!"
exit 1
fi
# Restart dnsmasq to load new custom CNAME records only if reload is not false
if [[ ! $reload == "false" ]]; then
echo "cname=${domain},${target}" >> "${dnscustomcnamefile}"
# Restart dnsmasq to load new custom CNAME records
RestartDNS
fi
}
RemoveCustomCNAMERecord() {
@@ -786,13 +749,12 @@ RemoveCustomCNAMERecord() {
domain="${args[2]}"
target="${args[3]}"
reload="${args[4]}"
validDomain="$(checkDomain "${domain}")"
if [[ -n "${validDomain}" ]]; then
validTarget="$(checkDomain "${target}")"
if [[ -n "${validTarget}" ]]; then
sed -i "/cname=${validDomain},${validTarget}$/Id" "${dnscustomcnamefile}"
if [[ -n "${validDomain}" ]]; then
sed -i "/cname=${validDomain},${validTarget}$/d" "${dnscustomcnamefile}"
else
echo " ${CROSS} Invalid Target Passed!"
exit 1
@@ -802,10 +764,8 @@ RemoveCustomCNAMERecord() {
exit 1
fi
# Restart dnsmasq to update removed custom CNAME records only if $reload not false
if [[ ! $reload == "false" ]]; then
# Restart dnsmasq to update removed custom CNAME records
RestartDNS
fi
}
main() {
@@ -828,6 +788,8 @@ main() {
"layout" ) SetWebUILayout;;
"theme" ) SetWebUITheme;;
"-h" | "--help" ) helpFunc;;
"privacymode" ) SetPrivacyMode;;
"resolve" ) ResolutionSettings;;
"addstaticdhcp" ) AddDHCPStaticAddress;;
"removestaticdhcp" ) RemoveDHCPStaticAddress;;
"-e" | "email" ) SetAdminEmail "$3";;


@@ -57,7 +57,7 @@ CREATE TABLE info
value TEXT NOT NULL
);
INSERT INTO "info" VALUES('version','15');
INSERT INTO "info" VALUES('version','14');
CREATE TABLE domain_audit
(
@@ -143,10 +143,12 @@ CREATE VIEW vw_gravity AS SELECT domain, adlist_by_group.group_id AS group_id
LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1);
CREATE VIEW vw_adlist AS SELECT DISTINCT address, id
CREATE VIEW vw_adlist AS SELECT DISTINCT address, adlist.id AS id
FROM adlist
WHERE enabled = 1
ORDER BY id;
LEFT JOIN adlist_by_group ON adlist_by_group.adlist_id = adlist.id
LEFT JOIN "group" ON "group".id = adlist_by_group.group_id
WHERE adlist.enabled = 1 AND (adlist_by_group.group_id IS NULL OR "group".enabled = 1)
ORDER BY adlist.id;
CREATE TRIGGER tr_domainlist_add AFTER INSERT ON domainlist
BEGIN


@@ -12,17 +12,14 @@ INSERT OR REPLACE INTO "group" SELECT * FROM OLD."group";
INSERT OR REPLACE INTO domain_audit SELECT * FROM OLD.domain_audit;
INSERT OR REPLACE INTO domainlist SELECT * FROM OLD.domainlist;
DELETE FROM OLD.domainlist_by_group WHERE domainlist_id NOT IN (SELECT id FROM OLD.domainlist);
INSERT OR REPLACE INTO domainlist_by_group SELECT * FROM OLD.domainlist_by_group;
INSERT OR REPLACE INTO adlist SELECT * FROM OLD.adlist;
DELETE FROM OLD.adlist_by_group WHERE adlist_id NOT IN (SELECT id FROM OLD.adlist);
INSERT OR REPLACE INTO adlist_by_group SELECT * FROM OLD.adlist_by_group;
INSERT OR REPLACE INTO info SELECT * FROM OLD.info;
INSERT OR REPLACE INTO client SELECT * FROM OLD.client;
DELETE FROM OLD.client_by_group WHERE client_id NOT IN (SELECT id FROM OLD.client);
INSERT OR REPLACE INTO client_by_group SELECT * FROM OLD.client_by_group;


@@ -1,2 +0,0 @@
#; Pi-hole FTL config file
#; Comments should start with #; to avoid issues with PHP and bash reading this file


@@ -21,19 +21,12 @@ start() {
else
# Touch files to ensure they exist (create if non-existing, preserve if existing)
mkdir -pm 0755 /run/pihole
[ ! -f /run/pihole-FTL.pid ] && install -m 644 -o pihole -g pihole /dev/null /run/pihole-FTL.pid
[ ! -f /run/pihole-FTL.port ] && install -m 644 -o pihole -g pihole /dev/null /run/pihole-FTL.port
[ ! -f /var/log/pihole-FTL.log ] && install -m 644 -o pihole -g pihole /dev/null /var/log/pihole-FTL.log
[ ! -f /var/log/pihole.log ] && install -m 644 -o pihole -g pihole /dev/null /var/log/pihole.log
[ ! -f /etc/pihole/dhcp.leases ] && install -m 644 -o pihole -g pihole /dev/null /etc/pihole/dhcp.leases
touch /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole-FTL.log /var/log/pihole.log /etc/pihole/dhcp.leases
# Ensure that permissions are set so that pihole-FTL can edit all necessary files
chown pihole:pihole /run/pihole /etc/pihole /var/log/pihole.log /var/log/pihole.log /etc/pihole/dhcp.leases
# Ensure that permissions are set so that pihole-FTL can edit the files. We ignore errors as the file may not (yet) exist
chmod -f 0644 /etc/pihole/macvendor.db /etc/pihole/dhcp.leases /var/log/pihole-FTL.log /var/log/pihole.log
chown pihole:pihole /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole-FTL.log /var/log/pihole.log /etc/pihole/dhcp.leases /run/pihole /etc/pihole
chmod 0644 /run/pihole-FTL.pid /run/pihole-FTL.port /var/log/pihole-FTL.log /var/log/pihole.log /etc/pihole/dhcp.leases /etc/pihole/macvendor.db
# Chown database files to the user FTL runs as. We ignore errors as the files may not (yet) exist
chown -f pihole:pihole /etc/pihole/pihole-FTL.db /etc/pihole/gravity.db /etc/pihole/macvendor.db
# Chown database file permissions so that the pihole group (web interface) can edit the file. We ignore errors as the files may not (yet) exist
chmod -f 0664 /etc/pihole/pihole-FTL.db
if setcap CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_NET_ADMIN,CAP_SYS_NICE,CAP_IPC_LOCK,CAP_CHOWN+eip "/usr/bin/pihole-FTL"; then
su -s /bin/sh -c "/usr/bin/pihole-FTL" pihole
else


@@ -11,15 +11,6 @@ $serverName = htmlspecialchars($_SERVER["SERVER_NAME"]);
// Remove external ipv6 brackets if any
$serverName = preg_replace('/^\[(.*)\]$/', '${1}', $serverName);
if (!is_file("/etc/pihole/setupVars.conf"))
die("[ERROR] File not found: <code>/etc/pihole/setupVars.conf</code>");
// Get values from setupVars.conf
$setupVars = parse_ini_file("/etc/pihole/setupVars.conf");
$svPasswd = !empty($setupVars["WEBPASSWORD"]);
$svEmail = (!empty($setupVars["ADMIN_EMAIL"]) && filter_var($setupVars["ADMIN_EMAIL"], FILTER_VALIDATE_EMAIL)) ? $setupVars["ADMIN_EMAIL"] : "";
unset($setupVars);
// Set landing page location, found within /var/www/html/
$landPage = "../landing.php";
@@ -34,21 +25,6 @@ if (!empty($_SERVER["FQDN"])) {
array_push($authorizedHosts, $_SERVER["VIRTUAL_HOST"]);
}
// Set which extension types render as Block Page (Including "" for index.ext)
$validExtTypes = array("asp", "htm", "html", "php", "rss", "xml", "");
// Get extension of current URL
$currentUrlExt = pathinfo($_SERVER["REQUEST_URI"], PATHINFO_EXTENSION);
// Set mobile friendly viewport
$viewPort = '<meta name="viewport" content="width=device-width, initial-scale=1">';
// Set response header
function setHeader($type = "x") {
header("X-Pi-hole: A black hole for Internet advertisements.");
if (isset($type) && $type === "js") header("Content-Type: application/javascript");
}
// Determine block page type
if ($serverName === "pi.hole"
|| (!empty($_SERVER["VIRTUAL_HOST"]) && $serverName === $_SERVER["VIRTUAL_HOST"])) {
@@ -71,325 +47,26 @@ if ($serverName === "pi.hole"
<html lang='en'>
<head>
<meta charset='utf-8'>
$viewPort
<meta name='viewport' content='width=device-width, initial-scale=1'>
<title>● $serverName</title>
<link rel='stylesheet' href='/pihole/blockingpage.css'>
<link rel='shortcut icon' href='/admin/img/favicons/favicon.ico' type='image/x-icon'>
<link rel='shortcut icon' href='admin/img/favicons/favicon.ico' type='image/x-icon'>
<style>
#splashpage { background: #222; color: rgba(255, 255, 255, 0.7); text-align: center; }
#splashpage img { margin: 5px; width: 256px; }
#splashpage b { color: inherit; }
</style>
</head>
<body id='splashpage'>
<div id="pihole_card">
<img src='/admin/img/logo.svg' alt='Pi-hole logo' id="pihole_logo_splash" />
<img src='admin/img/logo.svg' alt='Pi-hole logo' width='256' height='377'>
<br>
<p>Pi-<strong>hole</strong>: Your black hole for Internet advertisements</p>
<a href='/admin'>Did you mean to go to the admin panel?</a>
</div>
</body>
</html>
EOT;
exit($splashPage);
} elseif ($currentUrlExt === "js") {
// Serve Pi-hole JavaScript for blocked domains requesting JS
exit(setHeader("js").'var x = "Pi-hole: A black hole for Internet advertisements."');
} elseif (strpos($_SERVER["REQUEST_URI"], "?") !== FALSE && isset($_SERVER["HTTP_REFERER"])) {
// Serve blank image upon receiving REQUEST_URI w/ query string & HTTP_REFERRER
// e.g: An iframe of a blocked domain
exit(setHeader().'<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8"><script>window.close();</script>
</head>
<body>
<img src="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=">
</body>
</html>');
} elseif (!in_array($currentUrlExt, $validExtTypes) || substr_count($_SERVER["REQUEST_URI"], "?")) {
// Serve SVG upon receiving non $validExtTypes URL extension or query string
// e.g: Not an iframe of a blocked domain, such as when browsing to a file/query directly
// QoL addition: Allow the SVG to be clicked on in order to quickly show the full Block Page
$blockImg = '<a href="/">
<svg xmlns="http://www.w3.org/2000/svg" width="110" height="16">
<circle cx="8" cy="8" r="7" fill="none" stroke="rgba(152,2,2,.5)" stroke-width="2"/>
<path fill="rgba(152,2,2,.5)" d="M11.526 3.04l1.414 1.415-8.485 8.485-1.414-1.414z"/>
<text x="19.3" y="12" opacity=".3" style="font:11px Arial">
Blocked by Pi-hole
</text>
</svg>
</a>';
exit(setHeader()."<!doctype html>
<html lang='en'>
<head>
<meta charset='utf-8'>
$viewPort
</head>
<body>$blockImg</body>
</html>");
}
/* Start processing Block Page from here */
exit(header("HTTP/1.1 404 Not Found"));
// Define admin email address text based off $svEmail presence
$bpAskAdmin = !empty($svEmail) ? '<a href="mailto:'.$svEmail.'?subject=Site Blocked: '.$serverName.'"></a>' : "<span/>";
// Get possible non-standard location of FTL's database
$FTLsettings = parse_ini_file("/etc/pihole/pihole-FTL.conf");
if (isset($FTLsettings["GRAVITYDB"])) {
$gravityDBFile = $FTLsettings["GRAVITYDB"];
} else {
$gravityDBFile = "/etc/pihole/gravity.db";
}
// Connect to gravity.db
try {
$db = new SQLite3($gravityDBFile, SQLITE3_OPEN_READONLY);
} catch (Exception $exception) {
die("[ERROR]: Failed to connect to gravity.db");
}
// Get all adlist addresses
$adlistResults = $db->query("SELECT address FROM vw_adlist");
$adlistsUrls = array();
while ($row = $adlistResults->fetchArray()) {
array_push($adlistsUrls, $row[0]);
}
if (empty($adlistsUrls))
die("[ERROR]: There are no adlists enabled");
// Get total number of blocklists (Including Whitelist, Blacklist & Wildcard lists)
$adlistsCount = count($adlistsUrls) + 3;
// Set query timeout
ini_set("default_socket_timeout", 3);
// Logic for querying blocklists
function queryAds($serverName) {
// Determine the time it takes while querying adlists
$preQueryTime = microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"];
$queryAdsURL = sprintf(
"http://127.0.0.1:%s/admin/scripts/pi-hole/php/queryads.php?domain=%s&bp",
$_SERVER["SERVER_PORT"],
$serverName
);
$queryAds = file($queryAdsURL, FILE_IGNORE_NEW_LINES);
$queryAds = array_values(array_filter(preg_replace("/data:\s+/", "", $queryAds)));
$queryTime = sprintf("%.0f", (microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"]) - $preQueryTime);
// Exception Handling
try {
// Define Exceptions
if (strpos($queryAds[0], "No exact results") !== FALSE) {
// Return "none" into $queryAds array
return array("0" => "none");
} else if ($queryTime >= ini_get("default_socket_timeout")) {
// Connection Timeout
throw new Exception ("Connection timeout (".ini_get("default_socket_timeout")."s)");
} elseif (!strpos($queryAds[0], ".") !== false) {
// Unknown $queryAds output
throw new Exception ("Unhandled error message (<code>$queryAds[0]</code>)");
}
return $queryAds;
} catch (Exception $e) {
// Return exception as array
return array("0" => "error", "1" => $e->getMessage());
}
}
// Get results of queryads.php exact search
$queryAds = queryAds($serverName);
// Pass error through to Block Page
if ($queryAds[0] === "error")
die("[ERROR]: Unable to parse results from <i>queryads.php</i>: <code>".$queryAds[1]."</code>");
// Count total number of matching blocklists
$featuredTotal = count($queryAds);
// Place results into key => value array
$queryResults = null;
foreach ($queryAds as $str) {
$value = explode(" ", $str);
@$queryResults[$value[0]] .= "$value[1]";
}
// Determine if domain has been blacklisted, whitelisted, wildcarded or CNAME blocked
if (strpos($queryAds[0], "blacklist") !== FALSE) {
$notableFlagClass = "blacklist";
$adlistsUrls = array("π" => substr($queryAds[0], 2));
} elseif (strpos($queryAds[0], "whitelist") !== FALSE) {
$notableFlagClass = "noblock";
$adlistsUrls = array("π" => substr($queryAds[0], 2));
$wlInfo = "recentwl";
} elseif (strpos($queryAds[0], "wildcard") !== FALSE) {
$notableFlagClass = "wildcard";
$adlistsUrls = array("π" => substr($queryAds[0], 2));
} elseif ($queryAds[0] === "none") {
$featuredTotal = "0";
$notableFlagClass = "noblock";
// QoL addition: Determine appropriate info message if CNAME exists
// Suggests to the user that $serverName has a CNAME (alias) that may be blocked
$dnsRecord = dns_get_record("$serverName")[0];
if (array_key_exists("target", $dnsRecord)) {
$wlInfo = $dnsRecord['target'];
} else {
$wlInfo = "unknown";
}
}
// Set #bpOutput notification
$wlOutputClass = (isset($wlInfo) && $wlInfo === "recentwl") ? $wlInfo : "hidden";
$wlOutput = (isset($wlInfo) && $wlInfo !== "recentwl") ? "<a href='http://$wlInfo'>$wlInfo</a>" : "";
// Get Pi-hole Core version
$phVersion = exec("cd /etc/.pihole/ && git describe --long --tags");
// Print $execTime on development branches
// Testing for - is marginally faster than "git rev-parse --abbrev-ref HEAD"
if (explode("-", $phVersion)[1] != "0")
$execTime = microtime(true)-$_SERVER["REQUEST_TIME_FLOAT"];
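// Illustration (hypothetical version strings): `git describe --long --tags` prints
// "v5.8-0-g1a2b3c4" on a tagged release but "v5.8-17-g5d6e7f8" on a development
// branch, so a non-zero middle field (commits since the last tag) marks a dev build.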
// Please Note: Text is added via CSS to allow an admin to provide a localized
// language without the need to edit this file
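// Assumed sketch only: the shipped pihole/blockingpage.css (referenced below) supplies the
// visible strings via CSS "content" rules that an admin may translate, for example:
//   #bpTitle .title::before { content: "Website Blocked"; }
//   #bpAlt .altBtn::before  { content: "Why am I here?"; }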
setHeader();
?>
<!doctype html>
<!-- Pi-hole: A black hole for Internet advertisements
* (c) 2017 Pi-hole, LLC (https://pi-hole.net)
* Network-wide ad blocking via your own hardware.
*
* This file is copyright under the latest version of the EUPL. -->
<html>
<head>
<meta charset="utf-8">
<?=$viewPort ?>
<meta name="robots" content="noindex,nofollow">
<meta http-equiv="x-dns-prefetch-control" content="off">
<link rel="stylesheet" href="pihole/blockingpage.css">
<link rel="shortcut icon" href="admin/img/favicons/favicon.ico" type="image/x-icon">
<title>● <?=$serverName ?></title>
<script src="admin/scripts/vendor/jquery.min.js"></script>
<script>
window.onload = function () {
<?php
// Remove href fallback from "Back to safety" button
if ($featuredTotal > 0) {
echo '$("#bpBack").removeAttr("href");';
// Enable whitelisting if JS is available
echo '$("#bpWhitelist").prop("disabled", false);';
// Enable password input if necessary
if (!empty($svPasswd)) {
echo '$("#bpWLPassword").attr("placeholder", "Password");';
echo '$("#bpWLPassword").prop("disabled", false);';
}
// Otherwise hide the input
else {
echo '$("#bpWLPassword").hide();';
}
}
?>
}
</script>
</head>
<body id="blockpage"><div id="bpWrapper">
<header>
<h1 id="bpTitle">
<a class="title" href="/"><?php //Website Blocked ?></a>
</h1>
<div class="spc"></div>
<input id="bpAboutToggle" type="checkbox">
<div id="bpAbout">
<div class="aboutPH">
<div class="aboutImg"></div>
<p>Open Source Ad Blocker
<small>Designed for Raspberry Pi</small>
</p>
</div>
<div class="aboutLink">
<a class="linkPH" href="https://docs.pi-hole.net/"><?php //About PH ?></a>
<?php if (!empty($svEmail)) echo '<a class="linkEmail" href="mailto:'.$svEmail.'"></a>'; ?>
</div>
</div>
<div id="bpAlt">
<label class="altBtn" for="bpAboutToggle"><?php //Why am I here? ?></label>
</div>
</header>
<main>
<div id="bpOutput" class="<?=$wlOutputClass ?>"><?=$wlOutput ?></div>
<div id="bpBlock">
<p class="blockMsg"><?=$serverName ?></p>
</div>
<?php if(isset($notableFlagClass)) { ?>
<div id="bpFlag">
<p class="flagMsg <?=$notableFlagClass ?>"></p>
</div>
<?php } ?>
<div id="bpHelpTxt"><?=$bpAskAdmin ?></div>
<div id="bpButtons" class="buttons">
<a id="bpBack" onclick="javascript:history.back()" href="about:home"></a>
<?php if ($featuredTotal > 0) echo '<label id="bpInfo" for="bpMoreToggle"></label>'; ?>
</div>
<input id="bpMoreToggle" type="checkbox">
<div id="bpMoreInfo">
<span id="bpFoundIn"><span><?=$featuredTotal ?></span><?=$adlistsCount ?></span>
<pre id='bpQueryOutput'><?php if ($featuredTotal > 0) foreach ($queryResults as $num => $value) { echo "<span>[$num]:</span>$adlistsUrls[$num]\n"; } ?></pre>
<form id="bpWLButtons" class="buttons">
<input id="bpWLDomain" type="text" value="<?=$serverName ?>" disabled>
<input id="bpWLPassword" type="password" placeholder="JavaScript disabled" disabled>
<button id="bpWhitelist" type="button" disabled></button>
</form>
</div>
</main>
<footer><span><?=date("l g:i A, F dS"); ?>.</span> Pi-hole <?=$phVersion ?> (<?=gethostname()."/".$_SERVER["SERVER_ADDR"]; if (isset($execTime)) printf("/%.2fs", $execTime); ?>)</footer>
</div>
<script>
function add() {
$("#bpOutput").removeClass("hidden error exception");
$("#bpOutput").addClass("add");
var domain = "<?=$serverName ?>";
var pw = $("#bpWLPassword");
if(domain.length === 0) {
return;
}
$.ajax({
url: "/admin/scripts/pi-hole/php/add.php",
method: "post",
data: {"domain":domain, "list":"white", "pw":pw.val()},
success: function(response) {
if(response.indexOf("Pi-hole blocking") !== -1) {
setTimeout(function(){window.location.reload(1);}, 10000);
$("#bpOutput").removeClass("add");
$("#bpOutput").addClass("success");
$("#bpOutput").html("");
} else {
$("#bpOutput").removeClass("add");
$("#bpOutput").addClass("error");
$("#bpOutput").html(""+response+"");
}
},
error: function(jqXHR, exception) {
$("#bpOutput").removeClass("add");
$("#bpOutput").addClass("exception");
$("#bpOutput").html("");
}
});
}
<?php if ($featuredTotal > 0) { ?>
$(document).keypress(function(e) {
if(e.which === 13 && $("#bpWLPassword").is(":focus")) {
add();
}
});
$("#bpWhitelist").on("click", function() {
add();
});
<?php } ?>
</script>
</body></html>

View File

@@ -85,12 +85,5 @@ $HTTP["url"] =~ "^/admin/\.(.*)" {
url.access-deny = ("")
}
# allow teleporter and API qr code iframe on settings page
$HTTP["url"] =~ "/(teleporter|api_token)\.php$" {
$HTTP["referer"] =~ "/admin/settings\.php" {
setenv.add-response-header = ( "X-Frame-Options" => "SAMEORIGIN" )
}
}
# Default expire header
expire.url = ( "" => "access plus 0 seconds" )

View File

@@ -93,12 +93,5 @@ $HTTP["url"] =~ "^/admin/\.(.*)" {
url.access-deny = ("")
}
# allow teleporter and API qr code iframe on settings page
$HTTP["url"] =~ "/(teleporter|api_token)\.php$" {
$HTTP["referer"] =~ "/admin/settings\.php" {
setenv.add-response-header = ( "X-Frame-Options" => "SAMEORIGIN" )
}
}
# Default expire header
expire.url = ( "" => "access plus 0 seconds" )

View File

@@ -2,7 +2,7 @@
# shellcheck disable=SC1090
# Pi-hole: A black hole for Internet advertisements
# (c) 2017-2021 Pi-hole, LLC (https://pi-hole.net)
# (c) 2017-2018 Pi-hole, LLC (https://pi-hole.net)
# Network-wide ad blocking via your own hardware.
#
# Installs and Updates Pi-hole
@@ -74,7 +74,7 @@ PI_HOLE_FILES=(chronometer list piholeDebug piholeLogFlush setupLCD update versi
PI_HOLE_INSTALL_DIR="/opt/pihole"
PI_HOLE_CONFIG_DIR="/etc/pihole"
PI_HOLE_BIN_DIR="/usr/local/bin"
PI_HOLE_BLOCKPAGE_DIR="${webroot}/pihole"
PI_HOLE_404_DIR="${webroot}/pihole"
if [ -z "$useUpdateVars" ]; then
useUpdateVars=false
fi
@@ -172,7 +172,7 @@ os_check() {
local remote_os_domain valid_os valid_version valid_response detected_os detected_version display_warning cmdResult digReturnCode response
remote_os_domain=${OS_CHECK_DOMAIN_NAME:-"versions.pi-hole.net"}
detected_os=$(grep '^ID=' /etc/os-release | cut -d '=' -f2 | tr -d '"')
detected_os=$(grep "\bID\b" /etc/os-release | cut -d '=' -f2 | tr -d '"')
detected_version=$(grep VERSION_ID /etc/os-release | cut -d '=' -f2 | tr -d '"')
cmdResult="$(dig +short -t txt "${remote_os_domain}" @ns1.pi-hole.net 2>&1; echo $?)"
@@ -261,8 +261,8 @@ os_check() {
# Compatibility
package_manager_detect() {
# First check to see if apt-get is installed.
if is_command apt-get ; then
# First check to see if apt-get is installed.
if is_command apt-get ; then
# Set some global variables here
# We don't set them earlier since the installed package manager might be rpm, so these values would be different
PKG_MANAGER="apt-get"
@@ -287,12 +287,12 @@ package_manager_detect() {
# Packages required to run this install script (stored as an array)
INSTALLER_DEPS=(git iproute2 whiptail ca-certificates)
# Packages required to run Pi-hole (stored as an array)
PIHOLE_DEPS=(cron curl iputils-ping psmisc sudo unzip idn2 libcap2-bin dns-root-data libcap2 netcat-openbsd)
PIHOLE_DEPS=(cron curl iputils-ping lsof psmisc sudo unzip idn2 sqlite3 libcap2-bin dns-root-data libcap2)
# Packages required for the Web admin interface (stored as an array)
# It's useful to separate this from Pi-hole, since the two repos are also setup separately
PIHOLE_WEB_DEPS=(lighttpd "${phpVer}-common" "${phpVer}-cgi" "${phpVer}-sqlite3" "${phpVer}-xml" "${phpVer}-intl")
# Prior to PHP8.0, JSON functionality is provided as dedicated module, required by Pi-hole AdminLTE: https://www.php.net/manual/json.installation.php
if [[ -z "${phpInsMajor}" || "${phpInsMajor}" -lt 8 ]]; then
if [[ "${phpInsNewer}" != true || "${phpInsMajor}" -lt 8 ]]; then
PIHOLE_WEB_DEPS+=("${phpVer}-json")
fi
# The Web server user,
@@ -318,8 +318,8 @@ package_manager_detect() {
return 0
}
# If apt-get is not found, check for rpm.
elif is_command rpm ; then
# If apt-get is not found, check for rpm.
elif is_command rpm ; then
# Then check if dnf or yum is the package manager
if is_command dnf ; then
PKG_MANAGER="dnf"
@@ -332,28 +332,28 @@ package_manager_detect() {
PKG_COUNT="${PKG_MANAGER} check-update | egrep '(.i686|.x86|.noarch|.arm|.src)' | wc -l"
OS_CHECK_DEPS=(grep bind-utils)
INSTALLER_DEPS=(git iproute newt procps-ng which chkconfig ca-certificates)
PIHOLE_DEPS=(cronie curl findutils sudo unzip libidn2 psmisc libcap nmap-ncat)
PIHOLE_DEPS=(cronie curl findutils sudo unzip libidn2 psmisc sqlite libcap lsof)
PIHOLE_WEB_DEPS=(lighttpd lighttpd-fastcgi php-common php-cli php-pdo php-xml php-json php-intl)
LIGHTTPD_USER="lighttpd"
LIGHTTPD_GROUP="lighttpd"
LIGHTTPD_CFG="lighttpd.conf.fedora"
# If neither apt-get or yum/dnf package managers were found
else
# If neither apt-get or yum/dnf package managers were found
else
# we cannot install required packages
printf " %b No supported package manager found\\n" "${CROSS}"
# so exit the installer
exit
fi
fi
}
select_rpm_php(){
# If the host OS is Fedora,
if grep -qiE 'fedora|fedberry' /etc/redhat-release; then
# If the host OS is Fedora,
if grep -qiE 'fedora|fedberry' /etc/redhat-release; then
# all required packages should be available by default with the latest fedora release
: # continue
# or if host OS is CentOS,
elif grep -qiE 'centos|scientific' /etc/redhat-release; then
# or if host OS is CentOS,
elif grep -qiE 'centos|scientific' /etc/redhat-release; then
# Pi-Hole currently supports CentOS 7+ with PHP7+
SUPPORTED_CENTOS_VERSION=7
SUPPORTED_CENTOS_PHP_VERSION=7
@@ -427,8 +427,8 @@ select_rpm_php(){
else
printf " %b Continuing installation with unsupported RPM based distribution\\n" "${INFO}"
fi
fi
fi
fi
fi
}
# A function for checking if a directory is a git repository
@@ -629,12 +629,12 @@ welcomeDialogs() {
IMPORTANT: If you have not already done so, you must ensure that this device has a static IP. Either through DHCP reservation, or by manually assigning one. Depending on your operating system, there are many ways to achieve this.
Choose yes to indicate that you have understood this message, and wish to continue" "${r}" "${c}"; then
#Nothing to do, continue
#Nothing to do, continue
echo
else
else
printf " %b Installer exited at static IP message.\\n" "${INFO}"
exit 1
fi
fi
}
# A function that lets the user pick an interface to use with Pi-hole
@@ -674,7 +674,7 @@ chooseInterface() {
# Feed the available interfaces into this while loop
done <<< "${availableInterfaces}"
# The whiptail command that will be run, stored in a variable
chooseInterfaceCmd=(whiptail --separate-output --radiolist "Choose An Interface (press space to toggle selection)" "${r}" "${c}" 6)
chooseInterfaceCmd=(whiptail --separate-output --radiolist "Choose An Interface (press space to toggle selection)" "${r}" "${c}" "${interfaceCount}")
# Now run the command using the interfaces saved into the array
chooseInterfaceOptions=$("${chooseInterfaceCmd[@]}" "${interfacesArray[@]}" 2>&1 >/dev/tty) || \
# If the user chooses Cancel, exit
@@ -761,6 +761,7 @@ collect_v4andv6_information() {
if [[ -f "/etc/dhcpcd.conf" ]]; then
# configure networking via dhcpcd
getStaticIPv4Settings
setDHCPCD
fi
find_IPv6_information
printf " %b IPv6 address: %s\\n" "${INFO}" "${IPV6_ADDRESS}"
@@ -769,28 +770,17 @@ collect_v4andv6_information() {
getStaticIPv4Settings() {
# Local, named variables
local ipSettingsCorrect
local DHCPChoice
# Ask if the user wants to use DHCP settings as their static IP
# This is useful for users that are using DHCP reservations; then we can just use the information gathered via our functions
DHCPChoice=$(whiptail --backtitle "Calibrating network interface" --title "Static IP Address" --menu --separate-output "Do you want to use your current network settings as a static address? \\n
IP address: ${IPV4_ADDRESS} \\n
Gateway: ${IPv4gw} \\n" "${r}" "${c}" 3\
"Yes" "Set static IP using current values" \
"No" "Set static IP using custom values" \
"Skip" "I will set a static IP later, or have already done so" 3>&2 2>&1 1>&3) || \
{ printf " %bCancel was selected, exiting installer%b\\n" "${COL_LIGHT_RED}" "${COL_NC}"; exit 1; }
case ${DHCPChoice} in
"Yes")
if whiptail --backtitle "Calibrating network interface" --title "Static IP Address" --yesno "Do you want to use your current network settings as a static address?
IP address: ${IPV4_ADDRESS}
Gateway: ${IPv4gw}" "${r}" "${c}"; then
# If they choose yes, let the user know that the IP address will not be available via DHCP and may cause a conflict.
whiptail --msgbox --backtitle "IP information" --title "FYI: IP Conflict" "It is possible your router could still try to assign this IP to a device, which would cause a conflict. But in most cases the router is smart enough to not do that.
If you are worried, either manually set the address, or modify the DHCP reservation pool so it does not include the IP you want.
It is also possible to use a DHCP reservation, but if you are going to do that, you might as well set a static address." "${r}" "${c}"
If you are worried, either manually set the address, or modify the DHCP reservation pool so it does not include the IP you want.
It is also possible to use a DHCP reservation, but if you are going to do that, you might as well set a static address." "${r}" "${c}"
# Nothing else to do since the variables are already set above
setDHCPCD
;;
"No")
else
# Otherwise, we need to ask the user to input their desired settings.
# Start by getting the IPv4 address (pre-filling it with info gathered from DHCP)
# Start a loop to let the user enter their information with the chance to go back and edit it if necessary
@@ -819,9 +809,8 @@ getStaticIPv4Settings() {
ipSettingsCorrect=False
fi
done
setDHCPCD
;;
esac
# End the if statement for DHCP vs. static
fi
}
# Configure networking via dhcpcd
@@ -856,7 +845,7 @@ valid_ip() {
# Regex matching an optional port (starting with '#') range of 1-65536
local portelem="(#(6553[0-5]|655[0-2][0-9]|65[0-4][0-9]{2}|6[0-4][0-9]{3}|[1-5][0-9]{4}|[1-9][0-9]{0,3}|0))?";
# Build a full IPv4 regex from the above subexpressions
local regex="^${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}${portelem}$"
local regex="^${ipv4elem}\.${ipv4elem}\.${ipv4elem}\.${ipv4elem}${portelem}$"
# Evaluate the regex, and return the result
[[ $ip =~ ${regex} ]]
@@ -1128,11 +1117,8 @@ chooseBlocklists() {
appendToListsFile "${choice}"
done
# Create an empty adList file with appropriate permissions.
if [ ! -f "${adlistFile}" ]; then
install -m 644 /dev/null "${adlistFile}"
else
touch "${adlistFile}"
chmod 644 "${adlistFile}"
fi
}
# Accept a string parameter, it must be one of the default lists
@@ -1302,10 +1288,10 @@ installConfigs() {
echo "${DNS_SERVERS}" > "${PI_HOLE_CONFIG_DIR}/dns-servers.conf"
chmod 644 "${PI_HOLE_CONFIG_DIR}/dns-servers.conf"
# Install template file if it does not exist
# Install empty file if it does not exist
if [[ ! -r "${PI_HOLE_CONFIG_DIR}/pihole-FTL.conf" ]]; then
install -d -m 0755 ${PI_HOLE_CONFIG_DIR}
if ! install -T -o pihole -m 664 "${PI_HOLE_LOCAL_REPO}/advanced/Templates/pihole-FTL.conf" "${PI_HOLE_CONFIG_DIR}/pihole-FTL.conf" &>/dev/null; then
if ! install -o pihole -m 664 /dev/null "${PI_HOLE_CONFIG_DIR}/pihole-FTL.conf" &>/dev/null; then
printf " %bError: Unable to initialize configuration file %s/pihole-FTL.conf\\n" "${COL_LIGHT_RED}" "${PI_HOLE_CONFIG_DIR}"
return 1
fi
@@ -1333,12 +1319,11 @@ installConfigs() {
# and copy in the config file Pi-hole needs
install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/advanced/${LIGHTTPD_CFG} "${lighttpdConfig}"
# Make sure the external.conf file exists, as lighttpd v1.4.50 crashes without it
if [ ! -f /etc/lighttpd/external.conf ]; then
install -m 644 /dev/null /etc/lighttpd/external.conf
fi
touch /etc/lighttpd/external.conf
chmod 644 /etc/lighttpd/external.conf
# If there is a custom block page in the html/pihole directory, replace 404 handler in lighttpd config
if [[ -f "${PI_HOLE_BLOCKPAGE_DIR}/custom.php" ]]; then
sed -i 's/^\(server\.error-handler-404\s*=\s*\).*$/\1"\/pihole\/custom\.php"/' "${lighttpdConfig}"
sed -i 's/^\(server\.error-handler-404\s*=\s*\).*$/\1"pihole\/custom\.php"/' "${lighttpdConfig}"
fi
# Make the directories if they do not exist and set the owners
mkdir -p /run/lighttpd
@@ -1375,12 +1360,7 @@ install_manpage() {
# Testing complete, copy the files & update the man db
install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole.8 /usr/local/share/man/man8/pihole.8
install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole-FTL.8 /usr/local/share/man/man8/pihole-FTL.8
# remove previously installed "pihole-FTL.conf.5" man page
if [[ -f "/usr/local/share/man/man5/pihole-FTL.conf.5" ]]; then
rm /usr/local/share/man/man5/pihole-FTL.conf.5
fi
install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole-FTL.conf.5 /usr/local/share/man/man5/pihole-FTL.conf.5
if mandb -q &>/dev/null; then
# Updated successfully
printf "%b %b man pages installed and database updated\\n" "${OVER}" "${TICK}"
@@ -1388,7 +1368,7 @@ install_manpage() {
else
# Something is wrong with the system's man installation, clean up
# our files, (leave everything how we found it).
rm /usr/local/share/man/man8/pihole.8 /usr/local/share/man/man8/pihole-FTL.8
rm /usr/local/share/man/man8/pihole.8 /usr/local/share/man/man8/pihole-FTL.8 /usr/local/share/man/man5/pihole-FTL.conf.5
printf "%b %b man page db not updated, man pages not installed\\n" "${OVER}" "${CROSS}"
fi
}
@@ -1501,14 +1481,8 @@ update_package_cache() {
printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
else
# Otherwise, show an error and exit
# In case we used apt-get and apt is also available, we use this as recommendation as we have seen it
# gives more user-friendly (interactive) advice
if [[ ${PKG_MANAGER} == "apt-get" ]] && is_command apt ; then
UPDATE_PKG_CACHE="apt update"
fi
printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${str}"
printf " %bError: Unable to update package cache. Please try \"%s\"%b\\n" "${COL_LIGHT_RED}" "sudo ${UPDATE_PKG_CACHE}" "${COL_NC}"
printf " %bError: Unable to update package cache. Please try \"%s\"%b" "${COL_LIGHT_RED}" "sudo ${UPDATE_PKG_CACHE}" "${COL_NC}"
return 1
fi
}
@@ -1597,18 +1571,18 @@ install_dependent_packages() {
# Install the Web interface dashboard
installPiholeWeb() {
printf "\\n %b Installing blocking page...\\n" "${INFO}"
printf "\\n %b Installing 404 page...\\n" "${INFO}"
local str="Creating directory for blocking page, and copying files"
local str="Creating directory for 404 page, and copying files"
printf " %b %s..." "${INFO}" "${str}"
# Install the directory,
install -d -m 0755 ${PI_HOLE_BLOCKPAGE_DIR}
# and the blockpage
install -D -m 644 ${PI_HOLE_LOCAL_REPO}/advanced/{index,blockingpage}.* ${PI_HOLE_BLOCKPAGE_DIR}/
# Install the directory
install -d -m 0755 ${PI_HOLE_404_DIR}
# and the 404 handler
install -D -m 644 ${PI_HOLE_LOCAL_REPO}/advanced/index.php ${PI_HOLE_404_DIR}/
# Remove superseded file
if [[ -e "${PI_HOLE_BLOCKPAGE_DIR}/index.js" ]]; then
rm "${PI_HOLE_BLOCKPAGE_DIR}/index.js"
if [[ -e "${PI_HOLE_404_DIR}/index.js" ]]; then
rm "${PI_HOLE_404_DIR}/index.js"
fi
printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
@@ -1740,7 +1714,7 @@ finalExports() {
# If the setup variable file exists,
if [[ -e "${setupVars}" ]]; then
# update the variables in the file
sed -i.update.bak '/PIHOLE_INTERFACE/d;/IPV4_ADDRESS/d;/IPV6_ADDRESS/d;/PIHOLE_DNS_1\b/d;/PIHOLE_DNS_2\b/d;/QUERY_LOGGING/d;/INSTALL_WEB_SERVER/d;/INSTALL_WEB_INTERFACE/d;/LIGHTTPD_ENABLED/d;/CACHE_SIZE/d;/DNS_FQDN_REQUIRED/d;/DNS_BOGUS_PRIV/d;/DNSMASQ_LISTENING/d;' "${setupVars}"
sed -i.update.bak '/PIHOLE_INTERFACE/d;/IPV4_ADDRESS/d;/IPV6_ADDRESS/d;/PIHOLE_DNS_1\b/d;/PIHOLE_DNS_2\b/d;/QUERY_LOGGING/d;/INSTALL_WEB_SERVER/d;/INSTALL_WEB_INTERFACE/d;/LIGHTTPD_ENABLED/d;/CACHE_SIZE/d;/DNS_FQDN_REQUIRED/d;/DNS_BOGUS_PRIV/d;' "${setupVars}"
fi
# echo the information to the user
{
@@ -1756,7 +1730,6 @@ finalExports() {
echo "CACHE_SIZE=${CACHE_SIZE}"
echo "DNS_FQDN_REQUIRED=${DNS_FQDN_REQUIRED:-true}"
echo "DNS_BOGUS_PRIV=${DNS_BOGUS_PRIV:-true}"
echo "DNSMASQ_LISTENING=${DNSMASQ_LISTENING:-local}"
}>> "${setupVars}"
chmod 644 "${setupVars}"
@@ -1872,7 +1845,7 @@ checkSelinux() {
local CURRENT_SELINUX
local SELINUX_ENFORCING=0
# Check for SELinux configuration file and getenforce command
if [[ -f /etc/selinux/config ]] && is_command getenforce; then
if [[ -f /etc/selinux/config ]] && command -v getenforce &> /dev/null; then
# Check the default SELinux mode
DEFAULT_SELINUX=$(awk -F= '/^SELINUX=/ {print $2}' /etc/selinux/config)
case "${DEFAULT_SELINUX,,}" in
@@ -1941,6 +1914,8 @@ IPv6: ${IPV6_ADDRESS:-"Not Configured"}
If you have not done so already, the above IP should be set to static.
The install log is in /etc/pihole.
${additional}" "${r}" "${c}"
}
@@ -2104,6 +2079,7 @@ clone_or_update_repos() {
# shellcheck disable=SC2120
FTLinstall() {
# Local, named variables
local latesttag
local str="Downloading and Installing FTL"
printf " %b %s..." "${INFO}" "${str}"
@@ -2174,7 +2150,7 @@ FTLinstall() {
disable_dnsmasq() {
# dnsmasq can now be stopped and disabled if it exists
if is_command dnsmasq; then
if which dnsmasq &> /dev/null; then
if check_service_active "dnsmasq";then
printf " %b FTL can now resolve DNS Queries without dnsmasq running separately\\n" "${INFO}"
stop_service dnsmasq
@@ -2287,7 +2263,7 @@ FTLcheckUpdate() {
printf " %b Checking for existing FTL binary...\\n" "${INFO}"
local ftlLoc
ftlLoc=$(command -v pihole-FTL 2>/dev/null)
ftlLoc=$(which pihole-FTL 2>/dev/null)
local ftlBranch
@@ -2304,7 +2280,7 @@ FTLcheckUpdate() {
local localSha1
# if dnsmasq exists and is running at this point, force reinstall of FTL Binary
if is_command dnsmasq; then
if which dnsmasq &> /dev/null; then
if check_service_active "dnsmasq";then
return 0
fi
@@ -2325,7 +2301,7 @@ FTLcheckUpdate() {
# We already have a pihole-FTL binary downloaded.
# Alt branches don't have a tagged version against them, so just confirm the checksum of the local vs remote to decide whether we download or not
remoteSha1=$(curl -sSL --fail "https://ftl.pi-hole.net/${ftlBranch}/${binary}.sha1" | cut -d ' ' -f 1)
localSha1=$(sha1sum "$(command -v pihole-FTL)" | cut -d ' ' -f 1)
localSha1=$(sha1sum "$(which pihole-FTL)" | cut -d ' ' -f 1)
if [[ "${remoteSha1}" != "${localSha1}" ]]; then
printf " %b Checksums do not match, downloading from ftl.pi-hole.net.\\n" "${INFO}"
@@ -2355,7 +2331,7 @@ FTLcheckUpdate() {
printf " %b Latest FTL Binary already installed (%s). Confirming Checksum...\\n" "${INFO}" "${FTLlatesttag}"
remoteSha1=$(curl -sSL --fail "https://github.com/pi-hole/FTL/releases/download/${FTLversion%$'\r'}/${binary}.sha1" | cut -d ' ' -f 1)
localSha1=$(sha1sum "$(command -v pihole-FTL)" | cut -d ' ' -f 1)
localSha1=$(sha1sum "$(which pihole-FTL)" | cut -d ' ' -f 1)
if [[ "${remoteSha1}" != "${localSha1}" ]]; then
printf " %b Corruption detected...\\n" "${INFO}"
@@ -2495,12 +2471,12 @@ main() {
get_available_interfaces
# Find interfaces and let the user choose one
chooseInterface
# find IPv4 and IPv6 information of the device
collect_v4andv6_information
# Decide what upstream DNS Servers to use
setDNS
# Give the user a choice of blocklists to include in their install. Or not.
chooseBlocklists
# find IPv4 and IPv6 information of the device
collect_v4andv6_information
# Let the user decide if they want the web interface to be installed automatically
setAdminFlag
# Let the user decide if they want query logging enabled...

View File

@@ -73,24 +73,19 @@ if [[ -r "${piholeDir}/pihole.conf" ]]; then
echo -e " ${COL_LIGHT_RED}Ignoring overrides specified within pihole.conf! ${COL_NC}"
fi
# Generate new SQLite3 file from schema template
# Generate new sqlite3 file from schema template
generate_gravity_database() {
if ! pihole-FTL sqlite3 "${gravityDBfile}" < "${gravityDBschema}"; then
echo -e " ${CROSS} Unable to create ${gravityDBfile}"
return 1
fi
chown pihole:pihole "${gravityDBfile}"
chmod g+w "${piholeDir}" "${gravityDBfile}"
sqlite3 "${1}" < "${gravityDBschema}"
}
# Copy data from old to new database file and swap them
gravity_swap_databases() {
local str copyGravity oldAvail
local str copyGravity
str="Building tree"
echo -ne " ${INFO} ${str}..."
# The index is intentionally not UNIQUE as poor quality adlists may contain domains more than once
output=$( { pihole-FTL sqlite3 "${gravityTEMPfile}" "CREATE INDEX idx_gravity ON gravity (domain, adlist_id);"; } 2>&1 )
output=$( { sqlite3 "${gravityTEMPfile}" "CREATE INDEX idx_gravity ON gravity (domain, adlist_id);"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
@@ -102,6 +97,22 @@ gravity_swap_databases() {
str="Swapping databases"
echo -ne " ${INFO} ${str}..."
# Gravity copying SQL script
copyGravity="$(cat "${gravityDBcopy}")"
if [[ "${gravityDBfile}" != "${gravityDBfile_default}" ]]; then
# Replace default gravity script location by custom location
copyGravity="${copyGravity//"${gravityDBfile_default}"/"${gravityDBfile}"}"
fi
output=$( { sqlite3 "${gravityTEMPfile}" <<< "${copyGravity}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to copy data from ${gravityDBfile} to ${gravityTEMPfile}\\n ${output}"
return 1
fi
echo -e "${OVER} ${TICK} ${str}"
# Swap databases and remove or conditionally rename old database
# Number of available blocks on disk
availableBlocks=$(stat -f --format "%a" "${gravityDIR}")
@@ -109,24 +120,18 @@ gravity_swap_databases() {
gravityBlocks=$(stat --format "%b" ${gravityDBfile})
# Only keep the old database if available disk space is at least twice the size of the existing gravity.db.
# Better be safe than sorry...
oldAvail=false
if [ "${availableBlocks}" -gt "$((gravityBlocks * 2))" ] && [ -f "${gravityDBfile}" ]; then
oldAvail=true
echo -e " ${TICK} The old database remains available."
mv "${gravityDBfile}" "${gravityOLDfile}"
else
rm "${gravityDBfile}"
fi
mv "${gravityTEMPfile}" "${gravityDBfile}"
echo -e "${OVER} ${TICK} ${str}"
if $oldAvail; then
echo -e " ${TICK} The old database remains available."
fi
}
# Update timestamp when the gravity table was last updated successfully
update_gravity_timestamp() {
output=$( { printf ".timeout 30000\\nINSERT OR REPLACE INTO info (property,value) values ('updated',cast(strftime('%%s', 'now') as int));" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
output=$( { printf ".timeout 30000\\nINSERT OR REPLACE INTO info (property,value) values ('updated',cast(strftime('%%s', 'now') as int));" | sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
@@ -167,7 +172,7 @@ database_table_from_file() {
# Get MAX(id) from domainlist when INSERTing into this table
if [[ "${table}" == "domainlist" ]]; then
rowid="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT MAX(id) FROM domainlist;")"
rowid="$(sqlite3 "${gravityDBfile}" "SELECT MAX(id) FROM domainlist;")"
if [[ -z "$rowid" ]]; then
rowid=0
fi
@@ -197,7 +202,7 @@ database_table_from_file() {
# Store domains in database table specified by ${table}
# Use printf as .mode and .import need to be on separate lines
# see https://unix.stackexchange.com/a/445615/83260
output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" %s\\n" "${tmpFile}" "${table}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" %s\\n" "${tmpFile}" "${table}" | sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
@@ -217,7 +222,7 @@ database_table_from_file() {
# Update timestamp of last update of this list. We store this in the "old" database as all values in the new database will later be overwritten
database_adlist_updated() {
output=$( { printf ".timeout 30000\\nUPDATE adlist SET date_updated = (cast(strftime('%%s', 'now') as int)) WHERE id = %i;\\n" "${1}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
output=$( { printf ".timeout 30000\\nUPDATE adlist SET date_updated = (cast(strftime('%%s', 'now') as int)) WHERE id = %i;\\n" "${1}" | sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
@@ -228,7 +233,7 @@ database_adlist_updated() {
# Check if a column with name ${2} exists in gravity table with name ${1}
gravity_column_exists() {
output=$( { printf ".timeout 30000\\nSELECT EXISTS(SELECT * FROM pragma_table_info('%s') WHERE name='%s');\\n" "${1}" "${2}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
output=$( { printf ".timeout 30000\\nSELECT EXISTS(SELECT * FROM pragma_table_info('%s') WHERE name='%s');\\n" "${1}" "${2}" | sqlite3 "${gravityDBfile}"; } 2>&1 )
if [[ "${output}" == "1" ]]; then
return 0 # Bash 0 is success
fi
@@ -243,7 +248,7 @@ database_adlist_number() {
return;
fi
output=$( { printf ".timeout 30000\\nUPDATE adlist SET number = %i, invalid_domains = %i WHERE id = %i;\\n" "${num_source_lines}" "${num_invalid}" "${1}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
output=$( { printf ".timeout 30000\\nUPDATE adlist SET number = %i, invalid_domains = %i WHERE id = %i;\\n" "${num_lines}" "${num_invalid}" "${1}" | sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
@@ -259,7 +264,7 @@ database_adlist_status() {
return;
fi
output=$( { printf ".timeout 30000\\nUPDATE adlist SET status = %i WHERE id = %i;\\n" "${2}" "${1}" | pihole-FTL sqlite3 "${gravityDBfile}"; } 2>&1 )
output=$( { printf ".timeout 30000\\nUPDATE adlist SET status = %i WHERE id = %i;\\n" "${2}" "${1}" | sqlite3 "${gravityDBfile}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
@@ -274,10 +279,7 @@ migrate_to_database() {
if [ ! -e "${gravityDBfile}" ]; then
# Create new database file - note that this will be created in version 1
echo -e " ${INFO} Creating new gravity database"
if ! generate_gravity_database; then
echo -e " ${CROSS} Error creating new gravity database. Please contact support."
return 1
fi
generate_gravity_database "${gravityDBfile}"
# Check if gravity database needs to be updated
upgrade_gravityDB "${gravityDBfile}" "${piholeDir}"
@@ -376,9 +378,9 @@ gravity_DownloadBlocklists() {
fi
# Retrieve source URLs from gravity database
# We source only enabled adlists, SQLite3 stores boolean values as 0 (false) or 1 (true)
mapfile -t sources <<< "$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT address FROM vw_adlist;" 2> /dev/null)"
mapfile -t sourceIDs <<< "$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT id FROM vw_adlist;" 2> /dev/null)"
# We source only enabled adlists, sqlite3 stores boolean values as 0 (false) or 1 (true)
mapfile -t sources <<< "$(sqlite3 "${gravityDBfile}" "SELECT address FROM vw_adlist;" 2> /dev/null)"
mapfile -t sourceIDs <<< "$(sqlite3 "${gravityDBfile}" "SELECT id FROM vw_adlist;" 2> /dev/null)"
# Parse source domains from $sources
mapfile -t sourceDomains <<< "$(
@@ -392,12 +394,14 @@ gravity_DownloadBlocklists() {
)"
local str="Pulling blocklist source list into range"
echo -e "${OVER} ${TICK} ${str}"
if [[ -z "${sources[*]}" ]] || [[ -z "${sourceDomains[*]}" ]]; then
if [[ -n "${sources[*]}" ]] && [[ -n "${sourceDomains[*]}" ]]; then
echo -e "${OVER} ${TICK} ${str}"
else
echo -e "${OVER} ${CROSS} ${str}"
echo -e " ${INFO} No source list found, or it is empty"
echo ""
unset sources
return 1
fi
local url domain agent cmd_ext str target compression
@@ -407,7 +411,7 @@ gravity_DownloadBlocklists() {
str="Preparing new gravity database"
echo -ne " ${INFO} ${str}..."
rm "${gravityTEMPfile}" > /dev/null 2>&1
output=$( { pihole-FTL sqlite3 "${gravityTEMPfile}" < "${gravityDBschema}"; } 2>&1 )
output=$( { sqlite3 "${gravityTEMPfile}" < "${gravityDBschema}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
@@ -465,28 +469,9 @@ gravity_DownloadBlocklists() {
echo ""
done
str="Creating new gravity databases"
echo -ne " ${INFO} ${str}..."
# Gravity copying SQL script
copyGravity="$(cat "${gravityDBcopy}")"
if [[ "${gravityDBfile}" != "${gravityDBfile_default}" ]]; then
# Replace default gravity script location by custom location
copyGravity="${copyGravity//"${gravityDBfile_default}"/"${gravityDBfile}"}"
fi
output=$( { pihole-FTL sqlite3 "${gravityTEMPfile}" <<< "${copyGravity}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
echo -e "\\n ${CROSS} Unable to copy data from ${gravityDBfile} to ${gravityTEMPfile}\\n ${output}"
return 1
fi
echo -e "${OVER} ${TICK} ${str}"
str="Storing downloaded domains in new gravity database"
echo -ne " ${INFO} ${str}..."
output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" gravity\\n" "${target}" | pihole-FTL sqlite3 "${gravityTEMPfile}"; } 2>&1 )
output=$( { printf ".timeout 30000\\n.mode csv\\n.import \"%s\" gravity\\n" "${target}" | sqlite3 "${gravityTEMPfile}"; } 2>&1 )
status="$?"
if [[ "${status}" -ne 0 ]]; then
@@ -518,9 +503,8 @@ gravity_DownloadBlocklists() {
gravity_Blackbody=true
}
# num_target_lines increases for every correctly added domain in parseList()
num_target_lines=0
num_source_lines=0
total_num=0
num_lines=0
num_invalid=0
parseList() {
local adlistID="${1}" src="${2}" target="${3}" incorrect_lines
@@ -532,20 +516,18 @@ parseList() {
# Find (up to) five domains containing invalid characters (see above)
incorrect_lines="$(sed -e "/[^a-zA-Z0-9.\_-]/!d" "${src}" | head -n 5)"
local num_target_lines_new num_correct_lines
local num_target_lines num_correct_lines num_invalid
# Get number of lines in source file
num_source_lines="$(grep -c "^" "${src}")"
# Get the new number of lines in destination file
num_target_lines_new="$(grep -c "^" "${target}")"
# Number of new correctly added lines
num_correct_lines="$(( num_target_lines_new-num_target_lines ))"
# Update number of lines in target file
num_target_lines="$num_target_lines_new"
num_invalid="$(( num_source_lines-num_correct_lines ))"
num_lines="$(grep -c "^" "${src}")"
# Get number of lines in destination file
num_target_lines="$(grep -c "^" "${target}")"
num_correct_lines="$(( num_target_lines-total_num ))"
total_num="$num_target_lines"
num_invalid="$(( num_lines-num_correct_lines ))"
if [[ "${num_invalid}" -eq 0 ]]; then
echo " ${INFO} Analyzed ${num_source_lines} domains"
echo " ${INFO} Analyzed ${num_lines} domains"
else
echo " ${INFO} Analyzed ${num_source_lines} domains, ${num_invalid} domains invalid!"
echo " ${INFO} Analyzed ${num_lines} domains, ${num_invalid} domains invalid!"
fi
# Display sample of invalid lines if we found some
@@ -611,10 +593,6 @@ gravity_DownloadBlocklistFromUrl() {
if [[ $(dig "${domain}" | grep "NXDOMAIN" -c) -ge 1 ]]; then
blocked=true
fi;;
"NODATA")
if [[ $(dig "${domain}" | grep "NOERROR" -c) -ge 1 ]] && [[ -z $(dig +short "${domain}") ]]; then
blocked=true
fi;;
"NULL"|*)
if [[ $(dig "${domain}" +short | grep "0.0.0.0" -c) -ge 1 ]]; then
blocked=true
@@ -708,7 +686,7 @@ gravity_DownloadBlocklistFromUrl() {
else
echo -e " ${CROSS} List download failed: ${COL_LIGHT_RED}no cached list available${COL_NC}"
# Manually reset these two numbers because we do not call parseList here
num_source_lines=0
num_lines=0
num_invalid=0
database_adlist_number "${adlistID}"
database_adlist_status "${adlistID}" "4"
@@ -791,12 +769,12 @@ gravity_Table_Count() {
local table="${1}"
local str="${2}"
local num
num="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${table};")"
num="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(*) FROM ${table};")"
if [[ "${table}" == "vw_gravity" ]]; then
local unique
unique="$(pihole-FTL sqlite3 "${gravityDBfile}" "SELECT COUNT(DISTINCT domain) FROM ${table};")"
unique="$(sqlite3 "${gravityDBfile}" "SELECT COUNT(DISTINCT domain) FROM ${table};")"
echo -e " ${INFO} Number of ${str}: ${num} (${COL_BOLD}${unique} unique domains${COL_NC})"
pihole-FTL sqlite3 "${gravityDBfile}" "INSERT OR REPLACE INTO info (property,value) VALUES ('gravity_count',${unique});"
sqlite3 "${gravityDBfile}" "INSERT OR REPLACE INTO info (property,value) VALUES ('gravity_count',${unique});"
else
echo -e " ${INFO} Number of ${str}: ${num}"
fi
@@ -867,49 +845,6 @@ gravity_Cleanup() {
fi
}
database_recovery() {
local result
local str="Checking integrity of existing gravity database"
local option="${1}"
echo -ne " ${INFO} ${str}..."
if result="$(pihole-FTL sqlite3 "${gravityDBfile}" "PRAGMA integrity_check" 2>&1)"; then
echo -e "${OVER} ${TICK} ${str} - no errors found"
str="Checking foreign keys of existing gravity database"
echo -ne " ${INFO} ${str}..."
if result="$(pihole-FTL sqlite3 "${gravityDBfile}" "PRAGMA foreign_key_check" 2>&1)"; then
echo -e "${OVER} ${TICK} ${str} - no errors found"
if [[ "${option}" != "force" ]]; then
return
fi
else
echo -e "${OVER} ${CROSS} ${str} - errors found:"
while IFS= read -r line ; do echo " - $line"; done <<< "$result"
fi
else
echo -e "${OVER} ${CROSS} ${str} - errors found:"
while IFS= read -r line ; do echo " - $line"; done <<< "$result"
fi
str="Trying to recover existing gravity database"
echo -ne " ${INFO} ${str}..."
# We have to remove any possibly existing recovery database or this will fail
rm -f "${gravityDBfile}.recovered" > /dev/null 2>&1
if result="$(pihole-FTL sqlite3 "${gravityDBfile}" ".recover" | pihole-FTL sqlite3 "${gravityDBfile}.recovered" 2>&1)"; then
echo -e "${OVER} ${TICK} ${str} - success"
mv "${gravityDBfile}" "${gravityDBfile}.old"
mv "${gravityDBfile}.recovered" "${gravityDBfile}"
echo -ne " ${INFO} ${gravityDBfile} has been recovered"
echo -ne " ${INFO} The old ${gravityDBfile} has been moved to ${gravityDBfile}.old"
else
echo -e "${OVER} ${CROSS} ${str} - the following errors happened:"
while IFS= read -r line ; do echo " - $line"; done <<< "$result"
echo -e " ${CROSS} Recovery failed. Try \"pihole -r recreate\" instead."
exit 1
fi
echo ""
}
helpFunc() {
echo "Usage: pihole -g
Update domains from blocklists specified in adlists.list
@@ -920,37 +855,10 @@ Options:
exit 0
}
repairSelector() {
case "$1" in
"recover") recover_database=true;;
"recreate") recreate_database=true;;
*) echo "Usage: pihole -g -r {recover,recreate}
Attempt to repair gravity database
Available options:
pihole -g -r recover Try to recover a damaged gravity database file.
Pi-hole tries to restore as much as possible
from a corrupted gravity database.
pihole -g -r recover force Pi-hole will run the recovery process even when
no damage is detected. This option is meant to be
a last resort. Recovery is a fragile task
consuming a lot of resources and shouldn't be
performed unnecessarily.
pihole -g -r recreate Create a new gravity database file from scratch.
This will remove your existing gravity database
and create a new file from scratch. If you still
have the migration backup created when migrating
to Pi-hole v5.0, Pi-hole will import these files."
exit 0;;
esac
}
for var in "$@"; do
case "${var}" in
"-f" | "--force" ) forceDelete=true;;
"-r" | "--repair" ) repairSelector "$3";;
"-r" | "--recreate" ) recreate_database=true;;
"-h" | "--help" ) helpFunc;;
esac
done
@@ -964,7 +872,7 @@ fi
gravity_Trap
if [[ "${recreate_database:-}" == true ]]; then
str="Recreating gravity database from migration backup"
str="Restoring from migration backup"
echo -ne "${INFO} ${str}..."
rm "${gravityDBfile}"
pushd "${piholeDir}" > /dev/null || exit
@@ -973,15 +881,8 @@ if [[ "${recreate_database:-}" == true ]]; then
echo -e "${OVER} ${TICK} ${str}"
fi
if [[ "${recover_database:-}" == true ]]; then
database_recovery "$4"
fi
# Move possibly existing legacy files to the gravity database
if ! migrate_to_database; then
echo -e " ${CROSS} Unable to migrate to database. Please contact support."
exit 1
fi
migrate_to_database
if [[ "${forceDelete:-}" == true ]]; then
str="Deleting existing list cache"
@@ -992,21 +893,14 @@ if [[ "${forceDelete:-}" == true ]]; then
fi
# Gravity downloads blocklists next
if ! gravity_CheckDNSResolutionAvailable; then
echo -e " ${CROSS} Can not complete gravity update, no DNS is available. Please contact support."
exit 1
fi
gravity_CheckDNSResolutionAvailable
gravity_DownloadBlocklists
# Create local.list
gravity_generateLocalList
# Migrate rest of the data from old to new database
if ! gravity_swap_databases; then
echo -e " ${CROSS} Unable to create database. Please contact support."
exit 1
fi
gravity_swap_databases
# Update gravity timestamp
update_gravity_timestamp

View File

@@ -144,9 +144,7 @@ Command line arguments can be arbitrarily combined, e.g:
Start ftl in foreground with more verbose logging, process everything and shutdown immediately
.br
.SH "SEE ALSO"
\fBpihole\fR(8)
.br
\fBFor FTL's config options please see https://docs.pi-hole.net/ftldns/configfile/\fR
\fBpihole\fR(8), \fBpihole-FTL.conf\fR(5)
.br
.SH "COLOPHON"

313
manpages/pihole-FTL.conf.5 Normal file
View File

@@ -0,0 +1,313 @@
.TH "pihole-FTL.conf" "5" "pihole-FTL.conf" "pihole-FTL.conf" "November 2020"
.SH "NAME"
pihole-FTL.conf - FTL's config file
.br
.SH "DESCRIPTION"
/etc/pihole/pihole-FTL.conf will be read by \fBpihole-FTL(8)\fR on startup.
.br
For each setting the option shown first is the default.
.br
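As an example (illustrative, assumed values; any option may be omitted and its default applies), a minimal /etc/pihole/pihole-FTL.conf is a plain list of KEY=value lines such as:
.br
BLOCKINGMODE=NULL
.br
PRIVACYLEVEL=0
.br
MAXDBDAYS=90
.br
DEBUG_QUERIES=false
.br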
\fBBLOCKINGMODE=IP|IP-AAAA-NODATA|NODATA|NXDOMAIN|NULL\fR
.br
How should FTL reply to blocked queries?
IP - Pi-hole's IPs for blocked domains
IP-AAAA-NODATA - Pi-hole's IP + NODATA-IPv6 for blocked domains
NODATA - Using NODATA for blocked domains
NXDOMAIN - NXDOMAIN for blocked domains
NULL - Null IPs for blocked domains
.br
\fBCNAME_DEEP_INSPECT=true|false\fR
.br
Use this option to disable deep CNAME inspection. This might be beneficial for very low-end devices.
.br
\fBBLOCK_ESNI=true|false\fR
.br
Block requests to _esni.* sub-domains.
.br
\fBMAXLOGAGE=24.0\fR
.br
Up to how many hours of queries should be imported from the database and logs?
.br
Maximum is 744 (31 days)
.br
\fBPRIVACYLEVEL=0|1|2|3|4\fR
.br
Privacy level used to collect Pi-hole statistics.
.br
0 - show everything
.br
1 - hide domains
.br
2 - hide domains and clients
.br
3 - anonymous mode (hide everything)
.br
4 - disable all statistics
.br
\fBIGNORE_LOCALHOST=no|yes\fR
.br
Should FTL ignore queries coming from the local machine?
.br
\fBAAAA_QUERY_ANALYSIS=yes|no\fR
.br
Should FTL analyze AAAA queries?
.br
\fBANALYZE_ONLY_A_AND_AAAA=false|true\fR
.br
Should FTL only analyze A and AAAA queries?
.br
\fBSOCKET_LISTENING=localonly|all\fR
.br
Listen only for local socket connections on the API port or permit all connections.
.br
\fBFTLPORT=4711\fR
.br
On which port should FTL be listening?
.br
\fBRESOLVE_IPV6=yes|no\fR
.br
Should FTL try to resolve IPv6 addresses to hostnames?
.br
\fBRESOLVE_IPV4=yes|no\fR
.br
Should FTL try to resolve IPv4 addresses to hostnames?
.br
\fBDELAY_STARTUP=0\fR
.br
Time in seconds (between 0 and 300) to delay FTL startup.
.br
\fBNICE=-10\fR
.br
Set the niceness of the Pi-hole FTL process.
.br
Can be disabled altogether by setting a value of -999.
.br
\fBNAMES_FROM_NETDB=true|false\fR
.br
Control whether FTL should use a fallback option and try to obtain client names from checking the network table.
.br
E.g. IPv6 clients without a hostname will be compared via MAC address to known clients.
.br
\fBREFRESH_HOSTNAMES=IPV4|ALL|NONE\fR
.br
Change how (and if) hourly PTR requests are made to check for changes in client and upstream server hostnames:
.br
IPV4 - Do the hourly PTR lookups only for IPv4 addresses. This resolves issues in networks with many short-lived PE IPv6 addresses.
.br
ALL - Do the hourly PTR lookups for all addresses. This can create a lot of PTR queries in networks with many IPv6 addresses.
.br
NONE - Don't do hourly PTR lookups. Look up hostnames once (when first seeing a client) and never again. Future hostname changes may be missed.
.br
\fBMAXNETAGE=365\fR
.br
IP addresses (and associated host names) older than the specified number of days are removed.
.br
This avoids dead entries in the network overview table.
.br
\fBEDNS0_ECS=true|false\fR
.br
Should we overwrite the query source when client information is provided through EDNS0 client subnet (ECS) information?
.br
\fBPARSE_ARP_CACHE=true|false\fR
.br
Parse ARP cache to fill network overview table.
.br
\fBDBIMPORT=yes|no\fR
.br
Should FTL load information from the database on startup to be aware of the most recent history?
.br
\fBMAXDBDAYS=365\fR
.br
How long should queries be stored in the database? Setting this to 0 disables the database
.br
\fBDBINTERVAL=1.0\fR
.br
How often do we store queries in FTL's database [minutes]?
.br
Accepts value between 0.1 (6 sec) and 1440 (1 day)
.br
\fBDBFILE=/etc/pihole/pihole-FTL.db\fR
.br
Specify path and filename of FTL's SQLite long-term database.
.br
Setting this to DBFILE= disables the database altogether
.br
\fBLOGFILE=/var/log/pihole-FTL.log\fR
.br
The location of FTL's log file.
.br
\fBPIDFILE=/run/pihole-FTL.pid\fR
.br
The file which contains the PID of FTL's main process.
.br
\fBPORTFILE=/run/pihole-FTL.port\fR
.br
Specify path and filename where the FTL process will write its API port number.
.br
\fBSOCKETFILE=/run/pihole/FTL.sock\fR
.br
The file containing the socket FTL's API is listening on.
.br
\fBSETUPVARSFILE=/etc/pihole/setupVars.conf\fR
.br
The config file of Pi-hole containing, e.g., the current blocking status (do not change).
.br
\fBMACVENDORDB=/etc/pihole/macvendor.db\fR
.br
The database containing MAC -> Vendor information for the network table.
.br
\fBGRAVITYDB=/etc/pihole/gravity.db\fR
.br
Specify path and filename of FTL's SQLite3 gravity database. This database contains all domains relevant for Pi-hole's DNS blocking.
.br
\fBDEBUG_ALL=false|true\fR
.br
Enable all debug flags. If this is set to true, all other debug config options are ignored.
.br
\fBDEBUG_DATABASE=false|true\fR
.br
Print debugging information about database actions such as SQL statements and performance.
.br
\fBDEBUG_NETWORKING=false|true\fR
.br
Prints a list of the detected network interfaces on the startup of FTL.
.br
\fBDEBUG_LOCKS=false|true\fR
.br
Print information about shared memory locks.
.br
Messages will be generated when waiting, obtaining, and releasing a lock.
.br
\fBDEBUG_QUERIES=false|true\fR
.br
Print extensive DNS query information (domains, types, replies, etc.).
.br
\fBDEBUG_FLAGS=false|true\fR
.br
Print flags of queries received by the DNS hooks.
.br
Only effective when \fBDEBUG_QUERIES\fR is enabled as well.
.br
\fBDEBUG_SHMEM=false|true\fR
.br
Print information about shared memory buffers.
.br
Messages are either about creating or enlarging shmem objects or string injections.
.br
\fBDEBUG_GC=false|true\fR
.br
Print information about garbage collection (GC):
.br
What is to be removed, how many entries have been removed, and how long GC took.
.br
\fBDEBUG_ARP=false|true\fR
.br
Print information about ARP table processing:
.br
How long parsing took, whether the read MAC addresses are valid, and whether the macvendor.db file exists.
.br
\fBDEBUG_REGEX=false|true\fR
.br
Controls if FTL should print extended details about regex matching.
.br
\fBDEBUG_API=false|true\fR
.br
Print extra debugging information during telnet API calls.
.br
Currently only used to send extra information when getting all queries.
.br
\fBDEBUG_OVERTIME=false|true\fR
.br
Print information about overTime memory operations, such as initializing or moving overTime slots.
.br
\fBDEBUG_EXTBLOCKED=false|true\fR
.br
Print information about why FTL decided that certain queries were recognized as being externally blocked.
.br
\fBDEBUG_CAPS=false|true\fR
.br
Print information about POSIX capabilities granted to the FTL process.
.br
The current capabilities are printed on receipt of SIGHUP, i.e. after executing `killall -HUP pihole-FTL`.
.br
\fBDEBUG_DNSMASQ_LINES=false|true\fR
.br
Print file and line causing a dnsmasq event into FTL's log files.
.br
This is handy to implement additional hooks missing from FTL.
.br
\fBDEBUG_VECTORS=false|true\fR
.br
FTL uses dynamically allocated vectors for various tasks.
.br
This config option enables extensive debugging output, such as information about allocation, referencing, deletion, and appending.
.br
\fBDEBUG_RESOLVER=false|true\fR
.br
Extensive information about hostname resolution, like which DNS servers are used in the first and second hostname resolution attempts.
.br
.SH "SEE ALSO"
\fBpihole\fR(8), \fBpihole-FTL\fR(8)
.br
.SH "COLOPHON"
Pi-hole : The Faster-Than-Light (FTL) Engine is a lightweight, purpose-built daemon used to provide statistics needed for the Pi-hole Web Interface, and its API can be easily integrated into your own projects. Although it is an optional component of the Pi-hole ecosystem, it will be installed by default to provide statistics. As the name implies, FTL does its work \fIvery quickly\fR!
.br
Get sucked into the latest news and community activity by entering Pi-hole's orbit. Information about Pi-hole, and the latest version of the software can be found at https://pi-hole.net
.br

62
pihole
View File

@@ -21,9 +21,6 @@ readonly FTL_PID_FILE="/run/pihole-FTL.pid"
readonly colfile="${PI_HOLE_SCRIPT_DIR}/COL_TABLE"
source "${colfile}"
readonly utilsfile="${PI_HOLE_SCRIPT_DIR}/utils.sh"
source "${utilsfile}"
webpageFunc() {
source "${PI_HOLE_SCRIPT_DIR}/webpage.sh"
main "$@"
@@ -74,7 +71,8 @@ reconfigurePiholeFunc() {
}
updateGravityFunc() {
exec "${PI_HOLE_SCRIPT_DIR}"/gravity.sh "$@"
"${PI_HOLE_SCRIPT_DIR}"/gravity.sh "$@"
exit $?
}
queryFunc() {
@@ -97,7 +95,8 @@ uninstallFunc() {
versionFunc() {
shift
exec "${PI_HOLE_SCRIPT_DIR}"/version.sh "$@"
"${PI_HOLE_SCRIPT_DIR}"/version.sh "$@"
exit 0
}
# Get PID of main pihole-FTL process
@@ -226,7 +225,8 @@ Time:
fi
local str="Pi-hole Disabled"
addOrEditKeyValPair "BLOCKING_ENABLED" "false" "${setupVars}"
sed -i "/BLOCKING_ENABLED=/d" "${setupVars}"
echo "BLOCKING_ENABLED=false" >> "${setupVars}"
fi
else
# Enable Pi-hole
@@ -238,7 +238,8 @@ Time:
echo -e " ${INFO} Enabling blocking"
local str="Pi-hole Enabled"
addOrEditKeyValPair "BLOCKING_ENABLED" "true" "${setupVars}"
sed -i "/BLOCKING_ENABLED=/d" "${setupVars}"
echo "BLOCKING_ENABLED=true" >> "${setupVars}"
fi
restartDNS reload-lists
@@ -261,7 +262,7 @@ Options:
elif [[ "${1}" == "off" ]]; then
# Disable logging
sed -i 's/^log-queries/#log-queries/' /etc/dnsmasq.d/01-pihole.conf
addOrEditKeyValPair "QUERY_LOGGING" "false" "${setupVars}"
sed -i 's/^QUERY_LOGGING=true/QUERY_LOGGING=false/' /etc/pihole/setupVars.conf
if [[ "${2}" != "noflush" ]]; then
# Flush logs
"${PI_HOLE_BIN_DIR}"/pihole -f
@@ -271,7 +272,7 @@ Options:
elif [[ "${1}" == "on" ]]; then
# Enable logging
sed -i 's/^#log-queries/log-queries/' /etc/dnsmasq.d/01-pihole.conf
addOrEditKeyValPair "QUERY_LOGGING" "true" "${setupVars}"
sed -i 's/^QUERY_LOGGING=false/QUERY_LOGGING=true/' /etc/pihole/setupVars.conf
echo -e " ${INFO} Enabling logging..."
local str="Logging has been enabled!"
else
@@ -284,29 +285,27 @@ Options:
}
analyze_ports() {
local lv4 lv6 port=${1}
# FTL is listening on at least one port when this
# function is called
echo -e " ${TICK} DNS service is listening"
# Check individual address family/protocol combinations
# For a healthy Pi-hole, they should all be up (nothing printed)
lv4="$(ss --ipv4 --listening --numeric --tcp --udp src :${port})"
if grep -q "udp " <<< "${lv4}"; then
if grep -q "IPv4.*UDP" <<< "${1}"; then
echo -e " ${TICK} UDP (IPv4)"
else
echo -e " ${CROSS} UDP (IPv4)"
fi
if grep -q "tcp " <<< "${lv4}"; then
if grep -q "IPv4.*TCP" <<< "${1}"; then
echo -e " ${TICK} TCP (IPv4)"
else
echo -e " ${CROSS} TCP (IPv4)"
fi
lv6="$(ss --ipv6 --listening --numeric --tcp --udp src :${port})"
if grep -q "udp " <<< "${lv6}"; then
if grep -q "IPv6.*UDP" <<< "${1}"; then
echo -e " ${TICK} UDP (IPv6)"
else
echo -e " ${CROSS} UDP (IPv6)"
fi
if grep -q "tcp " <<< "${lv6}"; then
if grep -q "IPv6.*TCP" <<< "${1}"; then
echo -e " ${TICK} TCP (IPv6)"
else
echo -e " ${CROSS} TCP (IPv6)"
@@ -315,31 +314,19 @@ analyze_ports() {
}
statusFunc() {
# Determine if the pihole-FTL service is listening
local listening pid port
pid="$(getFTLPID)"
if [[ "$pid" -eq "-1" ]]; then
case "${1}" in
"web") echo "-1";;
*) echo -e " ${CROSS} DNS service is NOT running";;
esac
return 0
# Determine if there is a pihole service listening on port 53
local listening
listening="$(lsof -Pni:53)"
if grep -q "pihole" <<< "${listening}"; then
if [[ "${1}" != "web" ]]; then
analyze_ports "${listening}"
fi
else
# Get the port pihole-FTL is listening on using FTL's telnet API
port="$(echo ">dns-port >quit" | nc 127.0.0.1 4711)"
if [[ "${port}" == "0" ]]; then
case "${1}" in
"web") echo "-1";;
*) echo -e " ${CROSS} DNS service is NOT listening";;
esac
return 0
else
if [[ "${1}" != "web" ]]; then
echo -e " ${TICK} FTL is listening on port ${port}"
analyze_ports "${port}"
fi
fi
fi
# Determine if Pi-hole's blocking is enabled
@@ -352,19 +339,18 @@ statusFunc() {
elif grep -q "BLOCKING_ENABLED=true" /etc/pihole/setupVars.conf; then
# Configs are set
case "${1}" in
"web") echo "$port";;
"web") echo 1;;
*) echo -e " ${TICK} Pi-hole blocking is enabled";;
esac
else
# No configs were found
case "${1}" in
"web") echo -2;;
"web") echo 99;;
*) echo -e " ${INFO} Pi-hole blocking will be enabled";;
esac
# Enable blocking
"${PI_HOLE_BIN_DIR}"/pihole enable
fi
}
tailFunc() {

View File

@@ -1,5 +1,4 @@
FROM centos:7
RUN yum install -y git
ENV GITDIR /etc/.pihole
ENV SCRIPTDIR /opt/pihole

View File

@@ -1,5 +1,4 @@
FROM quay.io/centos/centos:stream8
RUN yum install -y git
FROM centos:8
ENV GITDIR /etc/.pihole
ENV SCRIPTDIR /opt/pihole

View File

@@ -1,5 +1,4 @@
FROM fedora:34
RUN dnf install -y git
FROM fedora:32
ENV GITDIR /etc/.pihole
ENV SCRIPTDIR /opt/pihole

View File

@@ -1,5 +1,4 @@
FROM fedora:33
RUN dnf install -y git
ENV GITDIR /etc/.pihole
ENV SCRIPTDIR /opt/pihole

View File

@@ -1,9 +1,10 @@
import pytest
import testinfra
import testinfra.backend.docker
import subprocess
from textwrap import dedent
check_output = testinfra.get_backend(
"local://"
).get_module("Command").check_output
SETUPVARS = {
'PIHOLE_INTERFACE': 'eth99',
@@ -11,42 +12,85 @@ SETUPVARS = {
'PIHOLE_DNS_2': '4.2.2.2'
}
IMAGE = 'pytest_pihole:test_container'
tick_box = "[\x1b[1;32m\u2713\x1b[0m]"
cross_box = "[\x1b[1;31m\u2717\x1b[0m]"
info_box = "[i]"
# Monkeypatch sh to bash; if they ever support a non-hard-coded /bin/sh this can go away
# https://github.com/pytest-dev/pytest-testinfra/blob/master/testinfra/backend/docker.py
def run_bash(self, command, *args, **kwargs):
@pytest.fixture
def Pihole(Docker):
'''
used to contain some script stubbing, now pretty much an alias.
Also provides bash as the default run function shell
'''
def run_bash(self, command, *args, **kwargs):
cmd = self.get_command(command, *args)
if self.user is not None:
out = self.run_local(
"docker exec -u %s %s /bin/bash -c %s", self.user, self.name, cmd
)
"docker exec -u %s %s /bin/bash -c %s",
self.user, self.name, cmd)
else:
out = self.run_local("docker exec %s /bin/bash -c %s", self.name, cmd)
out = self.run_local(
"docker exec %s /bin/bash -c %s", self.name, cmd)
out.command = self.encode(cmd)
return out
testinfra.backend.docker.DockerBackend.run = run_bash
funcType = type(Docker.run)
Docker.run = funcType(run_bash, Docker)
return Docker
@pytest.fixture
def host():
# run a container
docker_id = subprocess.check_output(
['docker', 'run', '-t', '-d', '--cap-add=ALL', IMAGE]).decode().strip()
def Docker(request, args, image, cmd):
'''
combine our fixtures into a docker run command and setup finalizer to
cleanup
'''
assert 'docker' in check_output('id'), "Are you in the docker group?"
docker_run = "docker run {} {} {}".format(args, image, cmd)
docker_id = check_output(docker_run)
# return a testinfra connection to the container
docker_host = testinfra.get_host("docker://" + docker_id)
def teardown():
check_output("docker rm -f %s", docker_id)
request.addfinalizer(teardown)
yield docker_host
# at the end of the test suite, destroy the container
subprocess.check_call(['docker', 'rm', '-f', docker_id])
docker_container = testinfra.get_backend("docker://" + docker_id)
docker_container.id = docker_id
return docker_container
@pytest.fixture
def args(request):
'''
-t became required when tput began being used
'''
return '-t -d'
@pytest.fixture(params=[
'test_container'
])
def tag(request):
'''
consumed by image to make the test matrix
'''
return request.param
@pytest.fixture()
def image(request, tag):
'''
built by test_000_build_containers.py
'''
return 'pytest_pihole:{}'.format(tag)
@pytest.fixture()
def cmd(request):
'''
default to doing nothing by tailing null, but don't exit
'''
return 'tail -f /dev/null'
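For orientation, here is a minimal sketch (not a test from this repository) of how the fixture chain above is consumed: pytest resolves Pihole -> Docker -> (args, image, cmd) -> tag, starts the container, and the container is removed again by the teardown/finalizer shown above.
# Hypothetical example only: the Pihole fixture yields a testinfra host whose
# .run() executes /bin/bash inside the pytest_pihole:test_container container.
def test_container_is_reachable(Pihole):
    result = Pihole.run('echo hello from the container')
    assert result.rc == 0
    assert 'hello from the container' in result.stdout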
# Helper functions
@@ -56,7 +100,7 @@ def mock_command(script, args, container):
in unit tests
'''
full_script_path = '/usr/local/bin/{}'.format(script)
mock_script = dedent(r'''\
mock_script = dedent('''\
#!/bin/bash -e
echo "\$0 \$@" >> /var/log/{script}
case "\$1" in'''.format(script=script))
@@ -77,75 +121,13 @@ def mock_command(script, args, container):
scriptlog=script))
def mock_command_passthrough(script, args, container):
'''
Like the other mock_command* helpers, intercepts commands we don't want to run for real
in unit tests, but only the specified arguments are mocked. Anything not defined will
be passed through to the actual command.
Example use-case: mocking `git pull` but still allowing `git clone` to work as intended
'''
orig_script_path = container.check_output('command -v {}'.format(script))
full_script_path = '/usr/local/bin/{}'.format(script)
mock_script = dedent(r'''\
#!/bin/bash -e
echo "\$0 \$@" >> /var/log/{script}
case "\$1" in'''.format(script=script))
for k, v in args.items():
case = dedent('''
{arg})
echo {res}
exit {retcode}
;;'''.format(arg=k, res=v[0], retcode=v[1]))
mock_script += case
mock_script += dedent(r'''
*)
{orig_script_path} "\$@"
;;'''.format(orig_script_path=orig_script_path))
mock_script += dedent('''
esac''')
container.run('''
cat <<EOF> {script}\n{content}\nEOF
chmod +x {script}
rm -f /var/log/{scriptlog}'''.format(script=full_script_path,
content=mock_script,
scriptlog=script))
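A hedged usage sketch for the helper above (the git arguments are illustrative, not taken from the repository's tests): intercept `git pull` inside the container while letting every other git subcommand, such as `git clone`, fall through to the real binary.
# Illustrative only; assumes it runs inside a test that receives the Pihole
# fixture and that git is already installed in the image.
mock_command_passthrough(
    'git',
    {'pull': ('Already up to date.', '0')},  # "$1" == pull -> canned output, exit 0
    Pihole
)
assert 'Already up to date.' in Pihole.run('git pull').stdout  # hits the mocked case
assert Pihole.run('git --version').rc == 0  # anything else passes through to real git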
def mock_command_run(script, args, container):
'''
Allows for setup of commands we don't really want to have to run for real
in unit tests
'''
full_script_path = '/usr/local/bin/{}'.format(script)
mock_script = dedent(r'''\
#!/bin/bash -e
echo "\$0 \$@" >> /var/log/{script}
case "\$1 \$2" in'''.format(script=script))
for k, v in args.items():
case = dedent('''
\"{arg}\")
echo {res}
exit {retcode}
;;'''.format(arg=k, res=v[0], retcode=v[1]))
mock_script += case
mock_script += dedent('''
esac''')
container.run('''
cat <<EOF> {script}\n{content}\nEOF
chmod +x {script}
rm -f /var/log/{scriptlog}'''.format(script=full_script_path,
content=mock_script,
scriptlog=script))
def mock_command_2(script, args, container):
'''
Allows for setup of commands we don't really want to have to run for real
in unit tests
'''
full_script_path = '/usr/local/bin/{}'.format(script)
mock_script = dedent(r'''\
mock_script = dedent('''\
#!/bin/bash -e
echo "\$0 \$@" >> /var/log/{script}
case "\$1 \$2" in'''.format(script=script))

View File

@@ -1,6 +1,6 @@
docker-compose
pytest
pytest-xdist
pytest-cov
pytest-testinfra
tox
docker-compose==1.23.2
pytest==4.3.0
pytest-xdist==1.26.1
pytest-cov==2.6.1
testinfra==1.19.0
tox==3.7.0

File diff suppressed because it is too large

View File

@@ -1,16 +0,0 @@
def test_key_val_replacement_works(host):
''' Confirms addOrEditKeyValPair provides the expected output '''
host.run('''
setupvars=./testoutput
source /opt/pihole/utils.sh
addOrEditKeyValPair "KEY_ONE" "value1" "./testoutput"
addOrEditKeyValPair "KEY_TWO" "value2" "./testoutput"
addOrEditKeyValPair "KEY_ONE" "value3" "./testoutput"
addOrEditKeyValPair "KEY_FOUR" "value4" "./testoutput"
cat ./testoutput
''')
output = host.run('''
cat ./testoutput
''')
expected_stdout = 'KEY_ONE=value3\nKEY_TWO=value2\nKEY_FOUR=value4\n'
assert expected_stdout == output.stdout
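The expected output encodes addOrEditKeyValPair's replace-or-append contract: setting an existing key overwrites its value in place, while a new key is appended. A minimal Python sketch of that contract, for illustration only (the real helper is a shell function in /opt/pihole/utils.sh):
def add_or_edit_key_val_pair(lines, key, value):
    # Replace the value if the key already exists, otherwise append it.
    out, replaced = [], False
    for line in lines:
        if line.split('=', 1)[0] == key:
            out.append('{}={}'.format(key, value))
            replaced = True
        else:
            out.append(line)
    if not replaced:
        out.append('{}={}'.format(key, value))
    return out

# Mirrors the test above: KEY_ONE is written twice and ends up as value3.
state = []
for k, v in [('KEY_ONE', 'value1'), ('KEY_TWO', 'value2'),
             ('KEY_ONE', 'value3'), ('KEY_FOUR', 'value4')]:
    state = add_or_edit_key_val_pair(state, k, v)
assert state == ['KEY_ONE=value3', 'KEY_TWO=value2', 'KEY_FOUR=value4']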

View File

@@ -0,0 +1,638 @@
from textwrap import dedent
import re
from .conftest import (
SETUPVARS,
tick_box,
info_box,
cross_box,
mock_command,
mock_command_2,
run_script
)
def test_supported_package_manager(Pihole):
'''
confirm installer exits when no supported package manager found
'''
# break supported package managers
Pihole.run('rm -rf /usr/bin/apt-get')
Pihole.run('rm -rf /usr/bin/rpm')
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
''')
expected_stdout = cross_box + ' No supported package manager found'
assert expected_stdout in package_manager_detect.stdout
# assert package_manager_detect.rc == 1
def test_setupVars_are_sourced_to_global_scope(Pihole):
'''
currently update_dialogs sources setupVars with a dot,
then various other functions use the variables.
This confirms the sourced variables are in scope between functions
'''
setup_var_file = 'cat <<EOF> /etc/pihole/setupVars.conf\n'
for k, v in SETUPVARS.items():
setup_var_file += "{}={}\n".format(k, v)
setup_var_file += "EOF\n"
Pihole.run(setup_var_file)
script = dedent('''\
set -e
printSetupVars() {
# Currently debug test function only
echo "Outputting sourced variables"
echo "PIHOLE_INTERFACE=${PIHOLE_INTERFACE}"
echo "PIHOLE_DNS_1=${PIHOLE_DNS_1}"
echo "PIHOLE_DNS_2=${PIHOLE_DNS_2}"
}
update_dialogs() {
. /etc/pihole/setupVars.conf
}
update_dialogs
printSetupVars
''')
output = run_script(Pihole, script).stdout
for k, v in SETUPVARS.items():
assert "{}={}".format(k, v) in output
def test_setupVars_saved_to_file(Pihole):
'''
confirm saved settings are written to a file for future updates to re-use
'''
# dedent works better with this and padding matching script below
set_setup_vars = '\n'
for k, v in SETUPVARS.items():
set_setup_vars += " {}={}\n".format(k, v)
Pihole.run(set_setup_vars).stdout
script = dedent('''\
set -e
echo start
TERM=xterm
source /opt/pihole/basic-install.sh
{}
mkdir -p /etc/dnsmasq.d
version_check_dnsmasq
echo "" > /etc/pihole/pihole-FTL.conf
finalExports
cat /etc/pihole/setupVars.conf
'''.format(set_setup_vars))
output = run_script(Pihole, script).stdout
for k, v in SETUPVARS.items():
assert "{}={}".format(k, v) in output
def test_selinux_not_detected(Pihole):
'''
confirms installer continues when SELinux configuration file does not exist
'''
check_selinux = Pihole.run('''
rm -f /etc/selinux/config
source /opt/pihole/basic-install.sh
checkSelinux
''')
expected_stdout = info_box + ' SELinux not detected'
assert expected_stdout in check_selinux.stdout
assert check_selinux.rc == 0
def test_installPiholeWeb_fresh_install_no_errors(Pihole):
'''
confirms all web page assets from Core repo are installed on a fresh build
'''
installWeb = Pihole.run('''
source /opt/pihole/basic-install.sh
installPiholeWeb
''')
expected_stdout = info_box + ' Installing 404 page...'
assert expected_stdout in installWeb.stdout
expected_stdout = tick_box + (' Creating directory for 404 page, '
'and copying files')
assert expected_stdout in installWeb.stdout
expected_stdout = info_box + ' Backing up index.lighttpd.html'
assert expected_stdout in installWeb.stdout
expected_stdout = ('No default index.lighttpd.html file found... '
'not backing up')
assert expected_stdout in installWeb.stdout
expected_stdout = tick_box + ' Installing sudoer file'
assert expected_stdout in installWeb.stdout
web_directory = Pihole.run('ls -r /var/www/html/pihole').stdout
assert 'index.php' in web_directory
def test_update_package_cache_success_no_errors(Pihole):
'''
confirms package cache was updated without any errors
'''
updateCache = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
update_package_cache
''')
expected_stdout = tick_box + ' Update local cache of available packages'
assert expected_stdout in updateCache.stdout
assert 'error' not in updateCache.stdout.lower()
def test_update_package_cache_failure_no_errors(Pihole):
'''
confirms package cache was not updated
'''
mock_command('apt-get', {'update': ('', '1')}, Pihole)
updateCache = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
update_package_cache
''')
expected_stdout = cross_box + ' Update local cache of available packages'
assert expected_stdout in updateCache.stdout
assert 'Error: Unable to update package cache.' in updateCache.stdout
def test_FTL_detect_aarch64_no_errors(Pihole):
'''
confirms only aarch64 package is downloaded for FTL engine
'''
# mock uname to return aarch64 platform
mock_command('uname', {'-m': ('aarch64', '0')}, Pihole)
# mock ldd to respond with aarch64 shared library
mock_command(
'ldd',
{
'/bin/ls': (
'/lib/ld-linux-aarch64.so.1',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Detected AArch64 (64 Bit ARM) processor'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv4t_no_errors(Pihole):
'''
confirms only armv4t package is downloaded for FTL engine
'''
# mock uname to return armv4t platform
mock_command('uname', {'-m': ('armv4t', '0')}, Pihole)
# mock ldd to respond with ld-linux shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + (' Detected ARMv4 processor')
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv5te_no_errors(Pihole):
'''
confirms only armv5te package is downloaded for FTL engine
'''
# mock uname to return armv5te platform
mock_command('uname', {'-m': ('armv5te', '0')}, Pihole)
# mock ldd to respond with ld-linux shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + (' Detected ARMv5 (or newer) processor')
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv6l_no_errors(Pihole):
'''
confirms only armv6l package is downloaded for FTL engine
'''
# mock uname to return armv6l platform
mock_command('uname', {'-m': ('armv6l', '0')}, Pihole)
# mock ldd to respond with ld-linux-armhf shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux-armhf.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + (' Detected ARMv6 processor '
'(with hard-float support)')
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv7l_no_errors(Pihole):
'''
confirms only armv7l package is downloaded for FTL engine
'''
# mock uname to return armv7l platform
mock_command('uname', {'-m': ('armv7l', '0')}, Pihole)
# mock ldd to respond with ld-linux-armhf shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux-armhf.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + (' Detected ARMv7 processor '
'(with hard-float support)')
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_armv8a_no_errors(Pihole):
'''
confirms only armv8a package is downloaded for FTL engine
'''
# mock uname to return armv8a platform
mock_command('uname', {'-m': ('armv8a', '0')}, Pihole)
# mock ldd to respond with ld-linux-armhf shared library
mock_command('ldd', {'/bin/ls': ('/lib/ld-linux-armhf.so.3', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Detected ARMv8 (or newer) processor'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_x86_64_no_errors(Pihole):
'''
confirms only x86_64 package is downloaded for FTL engine
'''
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = info_box + ' FTL Checks...'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Detected x86_64 processor'
assert expected_stdout in detectPlatform.stdout
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in detectPlatform.stdout
def test_FTL_detect_unknown_no_errors(Pihole):
''' confirms only generic package is downloaded for FTL engine '''
# mock uname to return generic platform
mock_command('uname', {'-m': ('mips', '0')}, Pihole)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
''')
expected_stdout = 'Not able to detect processor (unknown: mips)'
assert expected_stdout in detectPlatform.stdout
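Taken together, the detection tests above pin down the messages FTLdetect/get_binary_name are expected to print for each mocked `uname -m` / `ldd /bin/ls` combination. A rough Python sketch of that mapping, reconstructed purely from the assertions in these tests rather than from the installer's shell code:
def detected_processor_message(machine, ldd_of_ls=''):
    # Reconstructed from the test assertions; not the installer's actual logic.
    if machine == 'aarch64' and 'ld-linux-aarch64' in ldd_of_ls:
        return 'Detected AArch64 (64 Bit ARM) processor'
    if machine == 'armv4t':
        return 'Detected ARMv4 processor'
    if machine == 'armv5te':
        return 'Detected ARMv5 (or newer) processor'
    if machine in ('armv6l', 'armv7l') and 'ld-linux-armhf' in ldd_of_ls:
        arm_version = 'ARMv6' if machine == 'armv6l' else 'ARMv7'
        return 'Detected {} processor (with hard-float support)'.format(arm_version)
    if machine == 'armv8a':
        return 'Detected ARMv8 (or newer) processor'
    if machine == 'x86_64':
        return 'Detected x86_64 processor'
    return 'Not able to detect processor (unknown: {})'.format(machine)

assert detected_processor_message('mips') == 'Not able to detect processor (unknown: mips)'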
def test_FTL_download_aarch64_no_errors(Pihole):
'''
confirms only aarch64 package is downloaded for FTL engine
'''
# mock whiptail answers and ensure installer dependencies
mock_command('whiptail', {'*': ('', '0')}, Pihole)
Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
install_dependent_packages ${INSTALLER_DEPS[@]}
''')
download_binary = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
FTLinstall "pihole-FTL-aarch64-linux-gnu"
''')
expected_stdout = tick_box + ' Downloading and Installing FTL'
assert expected_stdout in download_binary.stdout
assert 'error' not in download_binary.stdout.lower()
def test_FTL_binary_installed_and_responsive_no_errors(Pihole):
'''
confirms FTL binary is copied and functional in installed location
'''
installed_binary = Pihole.run('''
source /opt/pihole/basic-install.sh
create_pihole_user
funcOutput=$(get_binary_name)
binary="pihole-FTL${funcOutput##*pihole-FTL}"
theRest="${funcOutput%pihole-FTL*}"
FTLdetect "${binary}" "${theRest}"
pihole-FTL version
''')
expected_stdout = 'v'
assert expected_stdout in installed_binary.stdout
# def test_FTL_support_files_installed(Pihole):
# '''
# confirms FTL support files are installed
# '''
# support_files = Pihole.run('''
# source /opt/pihole/basic-install.sh
# FTLdetect
# stat -c '%a %n' /var/log/pihole-FTL.log
# stat -c '%a %n' /run/pihole-FTL.port
# stat -c '%a %n' /run/pihole-FTL.pid
# ls -lac /run
# ''')
# assert '644 /run/pihole-FTL.port' in support_files.stdout
# assert '644 /run/pihole-FTL.pid' in support_files.stdout
# assert '644 /var/log/pihole-FTL.log' in support_files.stdout
def test_IPv6_only_link_local(Pihole):
'''
confirms IPv6 blocking is disabled for Link-local address
'''
# mock ip -6 address to return Link-local address
mock_command_2(
'ip',
{
'-6 address': (
'inet6 fe80::d210:52fa:fe00:7ad7/64 scope link',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = ('Unable to find IPv6 ULA/GUA address')
assert expected_stdout in detectPlatform.stdout
def test_IPv6_only_ULA(Pihole):
'''
confirms IPv6 blocking is enabled for ULA addresses
'''
# mock ip -6 address to return ULA address
mock_command_2(
'ip',
{
'-6 address': (
'inet6 fda2:2001:5555:0:d210:52fa:fe00:7ad7/64 scope global',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = 'Found IPv6 ULA address'
assert expected_stdout in detectPlatform.stdout
def test_IPv6_only_GUA(Pihole):
'''
confirms IPv6 blocking is enabled for GUA addresses
'''
# mock ip -6 address to return GUA address
mock_command_2(
'ip',
{
'-6 address': (
'inet6 2003:12:1e43:301:d210:52fa:fe00:7ad7/64 scope global',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = 'Found IPv6 GUA address'
assert expected_stdout in detectPlatform.stdout
def test_IPv6_GUA_ULA_test(Pihole):
'''
confirms IPv6 blocking is enabled for GUA and ULA addresses
'''
# mock ip -6 address to return GUA and ULA addresses
mock_command_2(
'ip',
{
'-6 address': (
'inet6 2003:12:1e43:301:d210:52fa:fe00:7ad7/64 scope global\n'
'inet6 fda2:2001:5555:0:d210:52fa:fe00:7ad7/64 scope global',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = 'Found IPv6 ULA address'
assert expected_stdout in detectPlatform.stdout
def test_IPv6_ULA_GUA_test(Pihole):
'''
confirms IPv6 blocking is enabled for GUA and ULA addresses
'''
# mock ip -6 address to return ULA and GUA addresses
mock_command_2(
'ip',
{
'-6 address': (
'inet6 fda2:2001:5555:0:d210:52fa:fe00:7ad7/64 scope global\n'
'inet6 2003:12:1e43:301:d210:52fa:fe00:7ad7/64 scope global',
'0'
)
},
Pihole
)
detectPlatform = Pihole.run('''
source /opt/pihole/basic-install.sh
find_IPv6_information
''')
expected_stdout = 'Found IPv6 ULA address'
assert expected_stdout in detectPlatform.stdout
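Read together, the IPv6 cases describe the selection order find_IPv6_information is expected to follow: prefer a ULA, otherwise fall back to a GUA, and report failure when only a link-local address is present. A small Python sketch of that ordering, based only on the expected stdout in the tests above (the real detection parses `ip -6 address` output in shell):
def classify_ipv6(addresses):
    # ULA = fc00::/7 (the tests use fda2:...), GUA = 2000::/3, link-local = fe80::/10.
    has_ula = any(a.lower().startswith(('fc', 'fd')) for a in addresses)
    has_gua = any(a.lower().startswith(('2', '3')) for a in addresses)
    if has_ula:
        return 'Found IPv6 ULA address'
    if has_gua:
        return 'Found IPv6 GUA address'
    return 'Unable to find IPv6 ULA/GUA address'

assert classify_ipv6(['fe80::d210:52fa:fe00:7ad7']) == 'Unable to find IPv6 ULA/GUA address'
assert classify_ipv6(['2003:12:1e43:301::1', 'fda2:2001:5555::1']) == 'Found IPv6 ULA address'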
def test_validate_ip(Pihole):
'''
Tests valid_ip for various IP addresses
'''
def test_address(addr, success=True):
output = Pihole.run('''
source /opt/pihole/basic-install.sh
valid_ip "{addr}"
'''.format(addr=addr))
assert output.rc == (0 if success else 1)
test_address('192.168.1.1')
test_address('127.0.0.1')
test_address('255.255.255.255')
test_address('255.255.255.256', False)
test_address('255.255.256.255', False)
test_address('255.256.255.255', False)
test_address('256.255.255.255', False)
test_address('1092.168.1.1', False)
test_address('not an IP', False)
test_address('8.8.8.8#', False)
test_address('8.8.8.8#0')
test_address('8.8.8.8#1')
test_address('8.8.8.8#42')
test_address('8.8.8.8#888')
test_address('8.8.8.8#1337')
test_address('8.8.8.8#65535')
test_address('8.8.8.8#65536', False)
test_address('8.8.8.8#-1', False)
test_address('00.0.0.0', False)
test_address('010.0.0.0', False)
test_address('001.0.0.0', False)
test_address('0.0.0.0#00', False)
test_address('0.0.0.0#01', False)
test_address('0.0.0.0#001', False)
test_address('0.0.0.0#0001', False)
test_address('0.0.0.0#00001', False)
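The accept/reject cases above amount to a fairly strict rule: a dotted quad with no leading zeros in any octet, optionally followed by '#' and a port from 0 to 65535, again with no leading zeros. One regex that reproduces these expectations, shown for illustration only (Pi-hole's valid_ip is implemented differently, in shell):
import re

octet = r'(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])'
port = r'(6553[0-5]|655[0-2][0-9]|65[0-4][0-9]{2}|6[0-4][0-9]{3}|[1-9][0-9]{0,3}|0)'
valid_ip_re = re.compile(r'^{o}\.{o}\.{o}\.{o}(#{p})?$'.format(o=octet, p=port))

assert valid_ip_re.match('192.168.1.1')
assert valid_ip_re.match('8.8.8.8#65535')
assert not valid_ip_re.match('010.0.0.0')
assert not valid_ip_re.match('8.8.8.8#65536')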
def test_os_check_fails(Pihole):
''' Confirms install fails on unsupported OS '''
Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
install_dependent_packages ${OS_CHECK_DEPS[@]}
install_dependent_packages ${INSTALLER_DEPS[@]}
cat <<EOT > /etc/os-release
ID=UnsupportedOS
VERSION_ID="2"
EOT
''')
detectOS = Pihole.run('''
source /opt/pihole/basic-install.sh
os_check
''')
expected_stdout = 'Unsupported OS detected: UnsupportedOS'
assert expected_stdout in detectOS.stdout
def test_os_check_passes(Pihole):
''' Confirms OS meets the requirements '''
Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
install_dependent_packages ${OS_CHECK_DEPS[@]}
install_dependent_packages ${INSTALLER_DEPS[@]}
''')
detectOS = Pihole.run('''
source /opt/pihole/basic-install.sh
os_check
''')
expected_stdout = 'Supported OS detected'
assert expected_stdout in detectOS.stdout
def test_package_manager_has_installer_deps(Pihole):
''' Confirms OS is able to install the required packages for the installer'''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
output = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
install_dependent_packages ${INSTALLER_DEPS[@]}
''')
assert 'No package' not in output.stdout # centos7 still exits 0...
assert output.rc == 0
def test_package_manager_has_pihole_deps(Pihole):
''' Confirms OS is able to install the required packages for Pi-hole '''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
output = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
install_dependent_packages ${PIHOLE_DEPS[@]}
''')
assert 'No package' not in output.stdout # centos7 still exits 0...
assert output.rc == 0
def test_package_manager_has_web_deps(Pihole):
''' Confirms OS is able to install the required packages for web '''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
output = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
install_dependent_packages ${PIHOLE_WEB_DEPS[@]}
''')
assert 'No package' not in output.stdout # centos7 still exits 0...
assert output.rc == 0

View File

@@ -5,11 +5,11 @@ from .conftest import (
)
def test_php_upgrade_default_optout_centos_eq_7(host):
def test_php_upgrade_default_optout_centos_eq_7(Pihole):
'''
confirms the default behavior to opt-out of installing PHP7 from REMI
'''
package_manager_detect = host.run('''
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -17,18 +17,18 @@ def test_php_upgrade_default_optout_centos_eq_7(host):
expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. '
'Deprecated PHP may be in use.')
assert expected_stdout in package_manager_detect.stdout
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert not remi_package.is_installed
def test_php_upgrade_user_optout_centos_eq_7(host):
def test_php_upgrade_user_optout_centos_eq_7(Pihole):
'''
confirms installer behavior when user opt-out of installing PHP7 from REMI
(php not currently installed)
'''
# Whiptail dialog returns Cancel for user prompt
mock_command('whiptail', {'*': ('', '1')}, host)
package_manager_detect = host.run('''
mock_command('whiptail', {'*': ('', '1')}, Pihole)
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -36,18 +36,18 @@ def test_php_upgrade_user_optout_centos_eq_7(host):
expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. '
'Deprecated PHP may be in use.')
assert expected_stdout in package_manager_detect.stdout
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert not remi_package.is_installed
def test_php_upgrade_user_optin_centos_eq_7(host):
def test_php_upgrade_user_optin_centos_eq_7(Pihole):
'''
confirms installer behavior when user opt-in to installing PHP7 from REMI
(php not currently installed)
'''
# Whiptail dialog returns Continue for user prompt
mock_command('whiptail', {'*': ('', '0')}, host)
package_manager_detect = host.run('''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -59,5 +59,5 @@ def test_php_upgrade_user_optin_centos_eq_7(host):
expected_stdout = tick_box + (' Remi\'s RPM repository has '
'been enabled for PHP7')
assert expected_stdout in package_manager_detect.stdout
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert remi_package.is_installed

View File

@@ -5,12 +5,12 @@ from .conftest import (
)
def test_php_upgrade_default_continue_centos_gte_8(host):
def test_php_upgrade_default_continue_centos_gte_8(Pihole):
'''
confirms the latest version of CentOS continues / does not optout
(should trigger on CentOS7 only)
'''
package_manager_detect = host.run('''
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -19,19 +19,19 @@ def test_php_upgrade_default_continue_centos_gte_8(host):
' Deprecated PHP may be in use.')
assert unexpected_stdout not in package_manager_detect.stdout
# ensure remi was not installed on latest CentOS
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert not remi_package.is_installed
def test_php_upgrade_user_optout_skipped_centos_gte_8(host):
def test_php_upgrade_user_optout_skipped_centos_gte_8(Pihole):
'''
confirms installer skips user opt-out of installing PHP7 from REMI on
latest CentOS (should trigger on CentOS7 only)
(php not currently installed)
'''
# Whiptail dialog returns Cancel for user prompt
mock_command('whiptail', {'*': ('', '1')}, host)
package_manager_detect = host.run('''
mock_command('whiptail', {'*': ('', '1')}, Pihole)
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -40,19 +40,19 @@ def test_php_upgrade_user_optout_skipped_centos_gte_8(host):
' Deprecated PHP may be in use.')
assert unexpected_stdout not in package_manager_detect.stdout
# ensure remi was not installed on latest CentOS
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert not remi_package.is_installed
def test_php_upgrade_user_optin_skipped_centos_gte_8(host):
def test_php_upgrade_user_optin_skipped_centos_gte_8(Pihole):
'''
confirms installer skips user opt-in to installing PHP7 from REMI on
latest CentOS (should trigger on CentOS7 only)
(php not currently installed)
'''
# Whiptail dialog returns Continue for user prompt
mock_command('whiptail', {'*': ('', '0')}, host)
package_manager_detect = host.run('''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -64,5 +64,5 @@ def test_php_upgrade_user_optin_skipped_centos_gte_8(host):
unexpected_stdout = tick_box + (' Remi\'s RPM repository has '
'been enabled for PHP7')
assert unexpected_stdout not in package_manager_detect.stdout
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert not remi_package.is_installed

View File

@@ -7,13 +7,13 @@ from .conftest import (
)
def test_release_supported_version_check_centos(host):
def test_release_supported_version_check_centos(Pihole):
'''
confirms installer exits on unsupported releases of CentOS
'''
# modify /etc/redhat-release to mock an unsupported CentOS release
host.run('echo "CentOS Linux release 6.9" > /etc/redhat-release')
package_manager_detect = host.run('''
Pihole.run('echo "CentOS Linux release 6.9" > /etc/redhat-release')
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -24,11 +24,11 @@ def test_release_supported_version_check_centos(host):
assert expected_stdout in package_manager_detect.stdout
def test_enable_epel_repository_centos(host):
def test_enable_epel_repository_centos(Pihole):
'''
confirms the EPEL package repository is enabled when installed on CentOS
'''
package_manager_detect = host.run('''
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -38,22 +38,22 @@ def test_enable_epel_repository_centos(host):
assert expected_stdout in package_manager_detect.stdout
expected_stdout = tick_box + ' Installed epel-release'
assert expected_stdout in package_manager_detect.stdout
epel_package = host.package('epel-release')
epel_package = Pihole.package('epel-release')
assert epel_package.is_installed
def test_php_version_lt_7_detected_upgrade_default_optout_centos(host):
def test_php_version_lt_7_detected_upgrade_default_optout_centos(Pihole):
'''
confirms the default behavior to opt-out of upgrading to PHP7 from REMI
'''
# first we will install the default php version to test installer behavior
php_install = host.run('yum install -y php')
php_install = Pihole.run('yum install -y php')
assert php_install.rc == 0
php_package = host.package('php')
php_package = Pihole.package('php')
default_centos_php_version = php_package.version.split('.')[0]
if int(default_centos_php_version) >= 7: # PHP7 is supported/recommended
pytest.skip("Test deprecated . Detected default PHP version >= 7")
package_manager_detect = host.run('''
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -61,24 +61,24 @@ def test_php_version_lt_7_detected_upgrade_default_optout_centos(host):
expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. '
'Deprecated PHP may be in use.')
assert expected_stdout in package_manager_detect.stdout
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert not remi_package.is_installed
def test_php_version_lt_7_detected_upgrade_user_optout_centos(host):
def test_php_version_lt_7_detected_upgrade_user_optout_centos(Pihole):
'''
confirms installer behavior when user opt-out to upgrade to PHP7 via REMI
'''
# first we will install the default php version to test installer behavior
php_install = host.run('yum install -y php')
php_install = Pihole.run('yum install -y php')
assert php_install.rc == 0
php_package = host.package('php')
php_package = Pihole.package('php')
default_centos_php_version = php_package.version.split('.')[0]
if int(default_centos_php_version) >= 7: # PHP7 is supported/recommended
pytest.skip("Test deprecated . Detected default PHP version >= 7")
# Whiptail dialog returns Cancel for user prompt
mock_command('whiptail', {'*': ('', '1')}, host)
package_manager_detect = host.run('''
mock_command('whiptail', {'*': ('', '1')}, Pihole)
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -86,24 +86,24 @@ def test_php_version_lt_7_detected_upgrade_user_optout_centos(host):
expected_stdout = info_box + (' User opt-out of PHP 7 upgrade on CentOS. '
'Deprecated PHP may be in use.')
assert expected_stdout in package_manager_detect.stdout
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert not remi_package.is_installed
def test_php_version_lt_7_detected_upgrade_user_optin_centos(host):
def test_php_version_lt_7_detected_upgrade_user_optin_centos(Pihole):
'''
confirms installer behavior when user opt-in to upgrade to PHP7 via REMI
'''
# first we will install the default php version to test installer behavior
php_install = host.run('yum install -y php')
php_install = Pihole.run('yum install -y php')
assert php_install.rc == 0
php_package = host.package('php')
php_package = Pihole.package('php')
default_centos_php_version = php_package.version.split('.')[0]
if int(default_centos_php_version) >= 7: # PHP7 is supported/recommended
pytest.skip("Test deprecated . Detected default PHP version >= 7")
# Whiptail dialog returns Continue for user prompt
mock_command('whiptail', {'*': ('', '0')}, host)
package_manager_detect = host.run('''
mock_command('whiptail', {'*': ('', '0')}, Pihole)
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
@@ -118,8 +118,8 @@ def test_php_version_lt_7_detected_upgrade_user_optin_centos(host):
expected_stdout = tick_box + (' Remi\'s RPM repository has '
'been enabled for PHP7')
assert expected_stdout in package_manager_detect.stdout
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert remi_package.is_installed
updated_php_package = host.package('php')
updated_php_package = Pihole.package('php')
updated_php_version = updated_php_package.version.split('.')[0]
assert int(updated_php_version) == 7

View File

@@ -5,7 +5,7 @@ from .conftest import (
)
def mock_selinux_config(state, host):
def mock_selinux_config(state, Pihole):
'''
Creates a mock SELinux config file with expected content
'''
@@ -13,20 +13,20 @@ def mock_selinux_config(state, host):
valid_states = ['enforcing', 'permissive', 'disabled']
assert state in valid_states
# getenforce returns the running state of SELinux
mock_command('getenforce', {'*': (state.capitalize(), '0')}, host)
mock_command('getenforce', {'*': (state.capitalize(), '0')}, Pihole)
# create mock configuration with desired content
host.run('''
Pihole.run('''
mkdir /etc/selinux
echo "SELINUX={state}" > /etc/selinux/config
'''.format(state=state.lower()))
def test_selinux_enforcing_exit(host):
def test_selinux_enforcing_exit(Pihole):
'''
confirms installer prompts to exit when SELinux is Enforcing by default
'''
mock_selinux_config("enforcing", host)
check_selinux = host.run('''
mock_selinux_config("enforcing", Pihole)
check_selinux = Pihole.run('''
source /opt/pihole/basic-install.sh
checkSelinux
''')
@@ -37,12 +37,12 @@ def test_selinux_enforcing_exit(host):
assert check_selinux.rc == 1
def test_selinux_permissive(host):
def test_selinux_permissive(Pihole):
'''
confirms installer continues when SELinux is Permissive
'''
mock_selinux_config("permissive", host)
check_selinux = host.run('''
mock_selinux_config("permissive", Pihole)
check_selinux = Pihole.run('''
source /opt/pihole/basic-install.sh
checkSelinux
''')
@@ -51,12 +51,12 @@ def test_selinux_permissive(host):
assert check_selinux.rc == 0
def test_selinux_disabled(host):
def test_selinux_disabled(Pihole):
'''
confirms installer continues when SELinux is Disabled
'''
mock_selinux_config("disabled", host)
check_selinux = host.run('''
mock_selinux_config("disabled", Pihole)
check_selinux = Pihole.run('''
source /opt/pihole/basic-install.sh
checkSelinux
''')

View File

@@ -1,16 +1,16 @@
def test_epel_and_remi_not_installed_fedora(host):
def test_epel_and_remi_not_installed_fedora(Pihole):
'''
confirms installer does not attempt to install EPEL/REMI repositories
on Fedora
'''
package_manager_detect = host.run('''
package_manager_detect = Pihole.run('''
source /opt/pihole/basic-install.sh
package_manager_detect
select_rpm_php
''')
assert package_manager_detect.stdout == ''
epel_package = host.package('epel-release')
epel_package = Pihole.package('epel-release')
assert not epel_package.is_installed
remi_package = host.package('remi-release')
remi_package = Pihole.package('remi-release')
assert not remi_package.is_installed

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _centos_7.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_7_support.py
pytest {posargs:-vv -n auto} ./test_automated_install.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_7_support.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _centos_8.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_8_support.py
pytest {posargs:-vv -n auto} ./test_automated_install.py ./test_centos_fedora_common_support.py ./test_centos_common_support.py ./test_centos_8_support.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _debian_10.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py
pytest {posargs:-vv -n auto} ./test_automated_install.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _debian_11.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py
pytest {posargs:-vv -n auto} ./test_automated_install.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _debian_9.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py
pytest {posargs:-vv -n auto} ./test_automated_install.py

8
test/tox.fedora_32.ini Normal file
View File

@@ -0,0 +1,8 @@
[tox]
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _fedora_32.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_automated_install.py ./test_centos_fedora_common_support.py ./test_fedora_support.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _fedora_33.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_fedora_support.py
pytest {posargs:-vv -n auto} ./test_automated_install.py ./test_centos_fedora_common_support.py ./test_fedora_support.py

View File

@@ -1,8 +0,0 @@
[tox]
envlist = py38
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _fedora_34.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py ./test_fedora_support.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _ubuntu_16.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py
pytest {posargs:-vv -n auto} ./test_automated_install.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _ubuntu_18.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py
pytest {posargs:-vv -n auto} ./test_automated_install.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _ubuntu_20.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py
pytest {posargs:-vv -n auto} ./test_automated_install.py

View File

@@ -1,8 +1,8 @@
[tox]
envlist = py38
envlist = py37
[testenv]
whitelist_externals = docker
deps = -rrequirements.txt
commands = docker build -f _ubuntu_21.Dockerfile -t pytest_pihole:test_container ../
pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py
pytest {posargs:-vv -n auto} ./test_automated_install.py