buzzkillb last won the day on May 29

buzzkillb had the most liked content!

Community Reputation: 278 Excellent

About buzzkillb

  • Rank: Senior Denarian

6451 profile views
  1. How to whitelist and blacklist domains using wildcards, so their subdomains get picked up too, with https://pi-hole.net/ An example: blacklisting a basic twitter.com and its subdomains (also blocking giphy for their GIF tracking), then whitelisting blockforums.org. Screenshots of the results were attached to the original post.
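For reference, a Pi-hole wildcard entry boils down to a regex of the form `(\.|^)domain\.com$` (the format Pi-hole's regex blocking used in versions of that era; treat the exact pattern as my assumption). A quick sketch of what that pattern does and does not match:

```python
import re

# The regex form Pi-hole generates for a wildcard blacklist entry
# (assumed format): match the bare domain or any subdomain of it.
pattern = re.compile(r"(\.|^)twitter\.com$")

for domain in ["twitter.com", "api.twitter.com", "nottwitter.com"]:
    # nottwitter.com is NOT matched: the character before "twitter.com"
    # must be a dot or the start of the string.
    print(domain, bool(pattern.search(domain)))
```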
  2. Borderlands: The Handsome Collection is free until June 4, 2020. https://www.epicgames.com/store/en-US/bundles/borderlands-the-handsome-collection
  3. With the release of the Raspberry Pi 4 8GB version and SSD booting, this could still be useful for anything with less than 8GB of RAM. Clone the log2ram repo: git clone https://github.com/azlux/log2ram.git Go into the folder and run install.sh: cd log2ram sudo ./install.sh Modify the amount of RAM to use; 40M is stock and another guide suggests trying 128M, up to you: sudo nano /etc/log2ram.conf Reboot the Pi: sudo reboot Check that the log2ram changes persisted: df -h That was an easy tweak.
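The file being edited is a small shell-style config; SIZE is the knob discussed above. The second line is a sketch from memory of azlux/log2ram and may not match your version:

```shell
# /etc/log2ram.conf (sketch)
SIZE=128M        # RAM given to /var/log; stock is 40M
USE_RSYNC=false  # whether to sync logs back to disk with rsync (assumption)
```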
  4. Figured I would post a GitHub repo up for this and a docker-compose.yml sample. Building locally on a Pi 4 4GB to test, and was curious if armv7 would compile. Going to build the x64 Docker Hub image first, and maybe do an arm Docker Hub build. Kept it very generic, and also the smallest alpine build I could think of, though no python/cloudflare option yet for mine. Below is an example of the image size; the Ubuntu and Debian base images were becoming enormous. Tor is not installed, but the full command line does work and is configured inside the docker-compose. REPOSITORY TAG IMAGE ID CREATED SIZE seeder 1.0 af13223bc05b 41 minutes ago 8.18MB https://github.com/buzzkillb/docker-generic-seeder In action from docker-compose.
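For anyone skimming, a docker-compose.yml for a seeder like this might look roughly as follows; the service name, image tag, and port are illustrative assumptions, not the actual contents of the linked repo:

```yaml
version: "3"
services:
  seeder:
    image: seeder:1.0        # the locally built 8.18MB alpine image above
    restart: unless-stopped
    ports:
      - "53:53/udp"          # DNS seeders typically answer on UDP 53 (assumption)
```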
  5. For reference, a clean Raspbian Lite Buster install uses this amount of RAM after all of the above (screenshot attached in the original post).
  6. About time to break out the Raspberry Pi 4 4GB again and play around in Docker with docker-compose. Install Docker: curl -sSL https://get.docker.com | sh Change permissions for your user account, probably pi: sudo usermod -aG docker pi Reboot or log out so you don't have to sudo for docker commands, then test that it worked: docker run hello-world Now install docker-compose: sudo apt-get install -y libffi-dev libssl-dev sudo apt-get install -y python3 python3-pip sudo apt-get remove python-configparser sudo pip3 install docker-compose Test that docker-compose works: docker-compose
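Once docker-compose prints its usage text, a minimal compose file is an easy end-to-end check (a hypothetical example, not from the post):

```yaml
# docker-compose.yml: run with `docker-compose up`
version: "3"
services:
  hello:
    image: hello-world
```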
  7. Trying out posting staking stats using only https://denarius.pro screenshots. I will probably switch this up to include most everything else and get a little more in depth. Staking has had a pretty high weight the last few days. https://medium.com/@realcryptobuzzb/d-staking-stats-may-27-2020-5fed4523e6b8?sk=ba5378d9857563edf67c1d7e6d8e75ea
  8. https://dns.watch/ was recommended by another user. Currently running that DNS through Pi-hole on a Pi Zero W.
  9. When I was running the snap Denarius daemon it ran very smoothly on the 4GB Pi 4: Ubuntu headless, then I installed a DE to VNC into. So the official PoE hat fits in that case?
  10. Free Civilization VI on Epic Games until May 28. https://www.epicgames.com/store/en-US/product/sid-meiers-civilization-vi/home
  11. linode.com is another option some others like.
  12. Buyvm.net doesn't appear to oversell, but is a bit pricier than Scaleway.
  13. Looking at what to do next myself. The USG that I have kind of sucks, since I want QoS now and that will take the poor little square to its knees. With the VLAN setup being relatively easy on these, I would like to stay in the UniFi universe. So UDM Pro, USG Pro, or wait for whatever they are releasing next. Otherwise I might just go pfSense, since I can build around what I am looking for. UniFi is just so sweet with how it's set up; too bad their lower-end hardware can't do simple things like QoS on high-speed connections.
  14. To install the Denarius snap QT/daemon in Qubes OS. Reference: https://www.qubes-os.org/doc/software-update-domu/ "Installing Snap Packages" Open up one of the TemplateVM terminals, like the Fedora TemplateVM. sudo dnf install snapd qubes-snapd-helper Then shut down the TemplateVM. sudo shutdown -h now Once this is done, create a new AppVM based on that TemplateVM. I used these settings. Name and Label: Crypto Color: Purple Type: Qube based on a template (AppVM) Template: default (fedora-30) Networking: sys-whonix Launch settings after creation, then give it 12GB of storage in the settings. Start a terminal in the new AppVM. snap install denarius After that's done, go back to your Qube Settings -> Applications -> click Refresh Applications, click Denarius, then the single arrow to move it to selected, and click OK. You can run the Denarius snap QT from the menu, or go to a terminal and run denarius.daemon for the daemon. A nice feature is that you can run under whatever firewall rules you want; in this example, using sys-whonix, I am now completely through Tor.
  15. This is relatively easy, and I am going to set up a clean Ubuntu 18.04 VM to show how I am doing this. The idea is to grab any stats we want and throw them into a pretty Grafana dashboard, and I think grabbing the CoinGecko API is a great example for walking through the whole process. In the end, you could face this to the internet and only allow non-signed-in users to view, not edit, the site. I would follow this by either downloading Ubuntu 18.04 for a local VM in the free VMware Player, using a local Ubuntu install on a laptop, or getting a cheap 1GB VPS to play on. Because I am going to use Docker, you can't really break much on the OS itself. Should work on 20.04, but it's still a bit new for now. https://releases.ubuntu.com/18.04.4/ My GitHub for the scripts as I add them: https://github.com/buzzkillb/snakes-on-a-chain The idea is that maybe another person will see this and tweak things to be more simple and generic. First, update and upgrade everything on the stock Ubuntu: sudo apt update sudo apt upgrade Next, install Docker: https://docs.docker.com/engine/install/ubuntu/ The important thing once you are done is to give your user permission to use Docker: sudo usermod -aG docker your-user I am using the user denarius for this: sudo usermod -aG docker denarius Close your terminal, open up a new one, and check that it works; you might even need to restart. We don't want to be running this with sudo every time. docker run hello-world When it works it will pull the container from Docker Hub and show you a cute Hello World message. Let's get Grafana set up, which is just as easy. First we create a data folder to store some local stuff from the container, and also set $ID to our current user.
mkdir data ID=$(id -u) Run the Docker command: docker run -d \ -p 3000:3000 \ --name=grafana \ --user $ID \ --volume "$PWD/data:/var/lib/grafana" \ -e "GF_INSTALL_PLUGINS=grafana-worldmap-panel" \ -e "GF_USERS_VIEWERS_CAN_EDIT=false" \ -e "GF_USERS_EDITORS_CAN_ADMIN=false" \ -e "GF_USERS_ALLOW_SIGN_UP=false" \ -e "GF_USERS_ALLOW_ORG_CREATE=false" \ -e "GF_AUTH_DISABLE_LOGIN_FORM=false" \ -e "GF_AUTH_ANONYMOUS_ENABLED=true" \ -e "GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer" \ -e "GF_ANALYTICS_GOOGLE_ANALYTICS_UA_ID=UA-157676508-1" \ -e "GF_SERVER_DOMAIN=denarius.pro" \ grafana/grafana We port forward port 3000 from the container to our local machine; Grafana runs on port 3000. Throw in a plugin as an example, add some variables so a user can't edit our public website, throw in analytics just because, and then give it a domain. If you want, you can remove all the -e lines. If you start to play around with that, you would stop the container, remove the container, and rerun whatever full run command you want: docker stop grafana docker rm grafana and then rerun without the -e lines. So running the above command we see: Unable to find image 'grafana/grafana:latest' locally latest: Pulling from grafana/grafana cbdbe7a5bc2a: Pull complete ed18d4ca725a: Pull complete 5ac007dea7db: Pull complete 33b8e7fbf663: Pull complete 09cd2fb04616: Pull complete 990c0b335bdb: Pull complete Digest: sha256:4bbfcbf9372e1022bf51b35ec1aaab04bf46e01b76a1d00b424f45b63cf90967 Status: Downloaded newer image for grafana/grafana:latest d593501ecdb6633acc3be6b472fb86eebea10772e58bfaf130f7a05f53de2f94 Let's check it's running from the command line: docker ps and we see CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d593501ecdb6 grafana/grafana "/run.sh" 31 seconds ago Up 30 seconds 0.0.0.0:3000->3000/tcp grafana It's running; now check it's working by opening up a web browser and going to your local VM's IP on port 3000. We want to sign in, but whoops, we can't create new accounts.
Let's do that again: docker stop grafana docker rm grafana Now rerun without any -e variables, just so we have full control. docker run -d \ -p 3000:3000 \ --name=grafana \ --user $ID \ --volume "$PWD/data:/var/lib/grafana" \ grafana/grafana Reload the webpage, type admin for both the username and password, and we get prompted for a new password. Make a strong random password here; Firefox can generate one for you. Now we see the full panel to start modifying. There is a lot going on, so I would suggest spending some time one day clicking on every single button you see and seeing where it goes. In the meantime, let's start getting some stats running, so when we modify the panel and dashboard we have some data to look at. Go back to the terminal and run the InfluxDB container: docker run -d \ --name="influxdb" \ -p 8086:8086 \ -v /home/denarius/influxdb:/var/lib/influxdb \ influxdb -config /etc/influxdb/influxdb.conf Again we will see influxdb being pulled from Docker Hub: Unable to find image 'influxdb:latest' locally latest: Pulling from library/influxdb 1c6172af85ee: Pull complete b194b0e3c928: Pull complete 1f5ec00f35d5: Pull complete 256a3fda0bc5: Pull complete 189579438204: Pull complete 855d46376ade: Pull complete 5d599164b1bb: Pull complete 294856b09ff2: Pull complete Digest: sha256:68b3ff6f43ffdb6fe7cf067879e2b28788eff1ebf9b104443c8c2bf3e32c43cf Status: Downloaded newer image for influxdb:latest ecbd84add640c2becc090bf19448f6cc1fddf7bd569bf682bee9e083064f6e85 Check our running containers: docker ps and we get CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ecbd84add640 influxdb "/entrypoint.sh -con…" 26 seconds ago Up 24 seconds 0.0.0.0:8086->8086/tcp influxdb 4b197777eedc grafana/grafana "/run.sh" 4 minutes ago Up 4 minutes 0.0.0.0:3000->3000/tcp grafana Easy, now we have our main Docker containers running. Let's grab two exchange APIs and start throwing them in. The scripts I have use Python 2, not Python 3, so we need to get Python if we don't have it already.
sudo apt install python Check we have Python 2: python --version We also need a couple of other Python-related things. I am using requests in my scripts, and we also need a connector between Python and InfluxDB. sudo apt install python-pip pip install requests pip install influxdb Now to get some scripts to test with: mkdir python cd python wget https://raw.githubusercontent.com/buzzkillb/snakes-on-a-chain/master/coingecko_southxchange_btc.py wget https://raw.githubusercontent.com/buzzkillb/snakes-on-a-chain/master/coingecko_tradeogre_btc.py The scripts throw the SouthXchange and TradeOgre Denarius prices, in relation to BTC, into 2 databases. Let's create those before we run the scripts, as the databases don't exist yet. Go into our influxdb Docker container: docker exec -it influxdb /bin/bash Run influx in the container to modify the database: influx We are in, good job. Let's check what databases are there: show databases This is a good sign; now create the databases for the 2 API calls: create database coingecko_southxchange_btc create database coingecko_trade_btc show databases exit Then exit again to get out of the container: exit Let's start running the scripts in a cronjob every minute to start populating the databases. I use nano; use whatever editor you like. crontab -e Insert these 2 lines to run the scripts every minute to pull the data. My username is denarius; you would put your username in the full path. * * * * * $(which python) /home/denarius/python/coingecko_tradeogre_btc.py >> ~/cron.log 2>&1 * * * * * $(which python) /home/denarius/python/coingecko_southxchange_btc.py >> ~/cron.log 2>&1 When you save and exit, the scripts will be running. Check the crontab log: cat ~/cron.log I used the wrong database name for TradeOgre. Follow the steps above to go back in and create the proper database called coingecko_ogre_btc. Let's drop the wrong one: drop database coingecko_trade_btc And check what we have now: show databases Very easy to modify things.
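The two scripts do roughly the same thing: fetch a ticker with requests, then write one point with the influxdb client. A minimal sketch of the data-shaping step, with the network and database calls stubbed out (the measurement and field names here are my guesses, not necessarily what the real scripts use):

```python
# Sketch: shape a CoinGecko-style ticker into the list-of-dicts body
# that influxdb-python's InfluxDBClient.write_points() expects.
def to_influx_point(ticker):
    return [{
        "measurement": "coingecko_tradeogre_btc",  # illustrative name
        "fields": {"last_btc": float(ticker["last"])},
    }]

# Stand-in for requests.get(api_url).json()
sample = {"last": "0.00000123"}
print(to_influx_point(sample))
```

In the real scripts this list would be passed to `InfluxDBClient(...).write_points(...)` once a minute from the cronjob above.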
So we have a database being populated while we walk through this. Let's connect the InfluxDB database to Grafana. Go back to your web browser and click the gear -> Data Sources, then Add data source. Type in influx; InfluxDB will pop up, click that. Let's go back to the terminal to find our local influxdb Docker container's IP address for the connection: docker inspect --format='{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -q) This prints the name and IP of /influxdb and /grafana; Grafana then connects to the influxdb container's IP on port 8086. Now create a SouthXchange data source, then click Save & Test, and if it worked we get a green "Data source is working". Click back and let's add TradeOgre. Save & Test, and let's move on to showing the data. Go to the left side of the screen and click + -> Dashboard, Add new panel. The first thing I do if I know I am mixing databases together is select Mixed from the dropdown box. For the SouthXchange settings: where you see distinct, you would click on mean(), then remove it, then go to Aggregations and pick Distinct. Same for the TradeOgre settings. For the first part of the graph panel I give the panel a name and then make the lines and gradient a little prettier. Now we want a side-by-side graph so BTC is on the left and USD is on the right. Go to Series overrides and type in the regex box, and it will offer your ALIAS BY entries from the left side. Select this one first, then select Y-Axis and then 2. Do this again for TradeOgre USD and it will look like this: the left axis is BTC price and the right is USD, with 8 decimals for the left Y. To find the Unit, click Unit -> Currency and find USD and Bitcoin. Also change the labels, and it should look like this. I make the table so I can see more stats fast, and also force decimals to 8 places. Click the refresh button to update the chart and then change the colors to something different. I like blue and purple; then click Save. Give this a name; sometimes it doesn't stick, but let's try.
Now we want to change the refresh of the chart and also click the star to make this a dashboard we can set as home. It's now starred and on 1m refresh, since the cronjob is set to run every 60 seconds. If we click on the top-left Grafana button we still get the stock dashboard; let's change that. Gear -> Preferences. I use these settings and then click Save. Clicking the top-left Grafana logo, we now get our new chart as our homepage. Now let's stop and remove grafana and rerun with our variables. For the test I am going to remove the domain and analytics. docker stop grafana docker rm grafana The docker command with those couple of -e lines removed for the final test: docker run -d \ -p 3000:3000 \ --name=grafana \ --user $ID \ --volume "$PWD/data:/var/lib/grafana" \ -e "GF_INSTALL_PLUGINS=grafana-worldmap-panel" \ -e "GF_USERS_VIEWERS_CAN_EDIT=false" \ -e "GF_USERS_EDITORS_CAN_ADMIN=false" \ -e "GF_USERS_ALLOW_SIGN_UP=false" \ -e "GF_USERS_ALLOW_ORG_CREATE=false" \ -e "GF_AUTH_DISABLE_LOGIN_FORM=false" \ -e "GF_AUTH_ANONYMOUS_ENABLED=true" \ -e "GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer" \ grafana/grafana Go back to your web browser, refresh, and sign out. It kicks us back to the login screen. Retype the local IP and now we can view our new dashboard. Make edits to sizing and such by signing back in and editing however you want. I left this line in the commands so I can easily edit dashboards and then stop, rm, and change this to true on the container restart, like this: -e "GF_AUTH_DISABLE_LOGIN_FORM=true" \ Once you get it going, probably install fail2ban and ufw, and deny everything but whatever ports you end up needing to run this as a public site. Also maybe remove root password login and only use SSH keys. Then run behind Cloudflare if that's your thing. Next will be a guide on how to include the daemon and some other stats.