GitHub: A Practical Guide to Branches and Pull Requests

A Common “simple” workflow

I often see this sort of workflow:

git add .
git commit -m "asd"
git push

Many System Administrators and DevOps engineers start with this simple approach.

Being the sole user of version control, they feel satisfied with just having their code versioned.

The problem arises the day another team member joins or (what’s more likely) you join a larger team. Larger teams require more structure and coordination to work effectively with version control, and they will most likely expect that structure from you before you get to work.

GitHub Pull Requests rules

When it comes to GitHub, you could argue that the Pull Request is its greatest feature (maybe GitHub Actions are the second greatest, hehe).

Let’s look at a simple workflow that utilizes this feature.

It’s designed to promote collaboration and supports DevOps practices such as:

  • Knowledge Sharing
  • Code Review
  • Testing
  • Continuous Integration
  • Documentation
  • Team Collaboration
  • Audit Trail

And much much more!

The workflow (git kata)

Git provides a structured way to manage changes, review them, and roll back if needed using git branches.

A Team Workflow

Use this in a team environment where you have access to the repository.

  1. Update your local main:

    git checkout main
    git pull origin main
  2. Create and switch to a new branch:

    git checkout -b feature-branch
  3. Make your changes and commit:

    git add .
    git commit -m 'Your commit message'
  4. Push to remote:

    git push -u origin feature-branch
  5. Create PR on GitHub from your branch to main

  6. After PR is merged, cleanup:

    git checkout main
    git pull origin main
    git branch -d feature-branch

Think of branches like snapshots of your system. Each branch is a safe place to make changes without affecting the main system.

Open Source Forking Workflow

When contributing to open source projects, you won’t have direct access to the main repository.

This is where forking comes in - it’s like creating your own copy of the project that you can modify freely.

Think of it as getting your own sandbox to play in, while still being able to share your changes with the original project.

  1. Fork the repository on GitHub

    • Click the “Fork” button in the top-right corner
    • This creates your own copy of the repository under your GitHub account
  2. Clone your fork to your local machine:

    git clone https://github.com/your-username/repo.git
  3. Add the original repository as upstream:

    git remote add upstream https://github.com/original-owner/repo.git
  4. Keep your fork in sync

    This is crucial - you need to make sure your local copy, your fork, and the original repository are all on the same page.

    Always make sure your fork is up to date before starting new work. You’ll minimize merge conflicts this way. This should be something you think about every time you open your IDE.

    git checkout main
    git pull origin main    # Get changes from your fork
    git pull upstream main  # Get changes from original repo
  5. Create your feature branch:

    git checkout -b feature-branch
  6. Make your changes and push to your fork:

    git add .
    git commit -m 'Your changes'
    git push -u origin feature-branch
  7. Create a Pull Request on GitHub

    • Go to your fork on GitHub
    • Click “Compare & pull request”
    • Select your feature branch to merge into the original repository’s main branch
  8. After your PR is merged, clean up:

    git checkout main
    git push -d origin feature-branch  # Delete remote branch
    git branch -d feature-branch       # Delete local branch
    git remote prune origin            # Clean up stale references

When to Use Each Workflow

  • Team Workflow: Use when you have direct access to the repository and are working with a team
  • Forking Workflow: Use when:
    • Contributing to open source projects
    • You don’t have write access to the main repository
    • You want to experiment with changes without affecting the main repository
    • You want to maintain your own version of a project

What do you mean Kata?

I first heard the term Kata from Michael Lombardi in his Getting GitHub repo (which I highly recommend you check out).

The idea is a small daily exercise to help you get used to the workflow and build memory through repetition.

In this case, the workflow I’ve described above can be used as a daily practice:

  1. Create a new branch
  2. Make a small change
  3. Create a PR
  4. Get it reviewed and merged, or close it yourself if it’s not needed and was only created for practice
  5. Clean up

This is a safe, simple and powerful way to get used to the workflow and build memory through repetition, which is one of my favorite ways to learn something new!

But it’s annoying to click Merge after every git commit!

Gotcha, luckily there are some tools that can help you with this.

GitHub CLI

GitHub CLI is a command line tool that allows you to interact with GitHub from the terminal.

Learning the workflow above, using the GitHub CLI, and enabling auto-merge can automate most of this. It won’t be as simple as pushing directly to main, but it’s a good compromise, and sooner or later you’ll have to learn it anyway.
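
For example, assuming the repository has auto-merge enabled in its settings and required checks configured, something like this (a sketch, not a complete setup) lets a PR merge itself once the checks pass:

# From your feature branch, after committing and pushing
gh pr create --fill              # title/body taken from your branch and commits
gh pr merge --auto --squash      # auto-merge the current branch's PR once checks pass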

Key takeaways

Use this workflow if:

  • Your repo contains code that you value
  • You want to collaborate with others
  • You want to learn Git and GitHub
  • You want to be a better developer
  • You’re using Git + GitHub at work

Do:

  • Practice the workflow daily
  • Create a tool to help you remember the sequence of commands at first (I created my own Python script; a rough sketch of the idea follows below)
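
As an illustration only (my actual helper is a Python script, and this bash sketch is an assumption about what such a reminder tool could look like):

#!/usr/bin/env bash
# kata.sh <branch-name> - walk the first steps of the team workflow
set -euo pipefail

branch="${1:?Usage: kata.sh <branch-name>}"

git checkout main
git pull origin main
git checkout -b "$branch"

echo "Branch '$branch' created."
echo "Next: make a small change, commit, then run:"
echo "  git push -u origin $branch"
echo "  gh pr create"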

Have fun and happy learning!

Open Source: Launch Flatpak Apps Faster with This One-Liner

I’m using sway on arch (btw), and naturally I need a terminal-based launcher for GUI apps. I was quite surprised how well this works; the prerequisites are fzf and flatpak (the one-liner below uses fish-style command substitution, use $(...) in bash):

flatpak run (flatpak list --app --columns=application | fzf) > /dev/null 2>&1 &

How It Works

  • flatpak list --app --columns=application
    Lists all installed Flatpak applications, showing only their application IDs.
  • fzf
    Lets you interactively search and select an app from the list.
  • flatpak run (...)
    Runs the selected app.
  • > /dev/null 2>&1 &
    Hides any output and runs the app in the background.

Flatpak CLI Details

  • --app
    Filters the list to only show applications (not runtimes or extensions).
  • --columns=application
    Shows only the application ID, making the output clean and easy to use with scripts.

Bash Function for Easy Use

Add this to your ~/.bashrc or ~/.bash_profile to use it as runfp:

runfp() {
  flatpak run $(flatpak list --app --columns=application | fzf) > /dev/null 2>&1 &
}

Lately I started using fish-shell, so my function looks like this:

function runfp
  flatpak run (flatpak list --app --columns=application | fzf) > /dev/null 2>&1 &
end

I love the way fish-shell has a default workflow for creating functions; it’s so easy to use and maintain.

function function-name # hit enter
    # write your function body
end

funcsave function-name

# git commit and push to github

Version Control Your Shell Configurations

As mentioned, I version control my custom shell functions.

You can version control your ~/.config/fish/functions folder (for fish) or your ~/.bashrc (for bash) using Git and GitHub. It’s possible with bash and zsh too, but you’ll have to set up the modular, one-function-per-file workflow yourself.

When I use bash I usually version control my ~/.bashrc as a GitHub gist, but I’m not a fan of it, because it’s not as modular as self-contained functions in a repo.

That said, a gist is great because you can use the GitHub CLI to edit or create a gist from the terminal!

# create a gist
gh gist create ~/.bashrc --desc "my bashrc"

# list gists
gh gist list

# edit a gist
gh gist edit <gist-id> --add ~/.config/fish/functions/mycoolfunc.fish

Too easy!

How to do it with GitHub CLI:

  • Initialize a repo and push to GitHub:
    cd ~/.config/fish
    git init
    git add functions/
    git commit -m "Track my fish functions"
    gh repo create --public --source=. --remote=origin --push
    # Or for bash:
    cd ~
    git init
    git add .bashrc
    git commit -m "Track my bashrc"
    gh repo create --public --source=. --remote=origin --push

Benefits:

  • Never lose your dear functions!
  • Sync your shell setup across multiple machines
  • Share your functions with others
  • Fun!

Happy function-writing!

GitOps: ITSM tools are not DevOps tools

A bit of a high level design rant, so pardon the fluff.

Problem: ITSM Tools Often Block Iteration Speed

A typical AWS Landing Zone workflow:

  • Users submit requests through ServiceNow for account access

  • The Catalog Item (ServiceNow request form) is often clunky and tricky to use

  • Approvals may take days

  • Once approved, the DevOps pipeline is triggered, but because it’s hard to build sensible API calls between an ITSM system and a DevOps tool, sanitizing the data is difficult, which makes the run more error prone

  • The user is presented with 200 lines of different logs after days of waiting

Additional problems with above workflow

  • DevOps teams struggle to improve the workflow because the ITSM team is overworked and busy with compliance and audit requirements

  • The actual provisioning happens outside ITSM, and the response messages are not that great (formatting is hard to do right here)

  • The DevOps engineer tasked with this work is usually not stoked about doing it

The core issue: ITSM tools are good at simple CRUD operations, but most of them are not DevOps tools.

Solution: Onboard the user to GitHub

Use ITSM tools for what they’re good at (access requests and compliance) while letting GitHub handle the DevOps pipeline.

Phase 1: ITSM tool Manages Repository Access

Developers request access to the infrastructure provisioning repository through standard ITSM processes:

  1. ServiceNow Request: “Access to aws-account-factory GitHub repository”
  2. Justification: “Need development environment for ML project”
  3. Approval Chain: Manager → Security → Infrastructure Team

Once approved, developers receive:

  • GitHub repository access
  • Documentation on account request process
  • YAML templates for their specific use case

Phase 2: GitHub Handles Technical Implementation

With repository access granted, the user creates a PR with an edited .yaml template, and the feedback loop can begin (Dev + Ops).

# accounts/engineering/ml-project-dev.yaml
account_name: "ml-project-development"
environment: "development"
cost_center: "engineering"
team_lead: "sarah@company.com"
compliance_level: "standard"

Phase 3: GitHub PR Drives the Workflow

The pull request becomes the technical collaboration space:

# .github/CODEOWNERS
accounts/engineering/* @infrastructure-team @security-team
accounts/production/* @infrastructure-team @security-team @compliance-team

This step will catch many bugs and help the whole engineering organization become more efficient.

Phase 4: Automated Integration Back to ServiceNow

GitHub Actions provisions infrastructure and updates ServiceNow with the Configuration Item (CI):

# .github/workflows/provision-account.yml
name: Provision AWS Account
on:
  push:
    paths: ['accounts/**/*.yaml']

jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Terraform Init
        run: terraform init
      - name: Terraform Apply
        run: terraform apply -auto-approve
      - name: Update ServiceNow CI
        run: |
          curl -X POST "$SERVICENOW_API/cmdb_ci_aws_account" \
            -H "Authorization: Bearer $SERVICENOW_TOKEN" \
            -d '{
              "account_id": "${{ vars.AWS_ACCOUNT_ID }}",
              "environment": "${{ vars.ENVIRONMENT }}",
              "cost_center": "${{ vars.COST_CENTER }}",
              "provisioned_date": "${{ vars.CURRENT_DATE }}"
            }'

ITSM compliance: ServiceNow maintains complete configuration item (CI) records and audit trails while technical teams work in their preferred tools.

Technical Implementation

Use a proper Infrastructure as Code (IaC) tool to provision the infrastructure, great examples are Terraform or Pulumi.
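
As a rough illustration (the directory layout matches the repository structure below, but the exact commands are an assumption, not the original pipeline), the Terraform step often boils down to:

# Run from the repository root; assumes terraform/environments/development
# is a root module that reads the account definitions under accounts/
cd terraform/environments/development
terraform init          # download providers and modules
terraform plan -out=tfplan
terraform apply tfplan  # apply the reviewed plan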

Repository Structure

aws-account-factory/
├── docs/
│   ├── getting-started.md
│   └── templates/
├── accounts/
│   ├── engineering/
│   ├── security/
│   └── production/
├── terraform/
│   ├── modules/
│   └── environments/
├── scripts/
│   └── copy-template.sh
└── .github/
    ├── workflows/
    └── CODEOWNERS

API Integration

GitHub Actions updates ServiceNow automatically:

- name: Update ServiceNow CMDB
  run: |
    curl -X POST "$SERVICENOW_API/cmdb_ci_aws_account" \
      -d '{
        "account_id": "${{ vars.AWS_ACCOUNT_ID }}",
        "environment": "${{ vars.ENVIRONMENT }}",
        "cost_center": "${{ vars.COST_CENTER }}"
      }'    

Why This Works Better

Speed Improvements

  • PR feedback is immediate, not dependent on ITSM ticket updates
  • GitHub Actions runs in parallel, not sequential ITSM workflow steps
  • Developers can iterate on configurations without going back through forms

Better Collaboration

  • Infrastructure teams review actual YAML configurations
  • Security teams can suggest specific code changes
  • All discussions happen with full technical context

Discussion

ITSM tools and DevOps tools solve different problems. ServiceNow is good at managing access and compliance workflows. GitHub is industry leading at technical collaboration and automation.

This approach uses both tools for what they do well. The API integration keeps ServiceNow updated while letting technical teams work efficiently.

Have fun, now you only have about 25,000 lines of code to write!

Happy building!

Git Clone vs Fork - What's the Difference?

This is something I’ve sort of understood but never quite got around to nailing down the differences, so I felt like sharing it!

Let’s look at the key differences between Git’s clone and GitHub’s fork operations, and when to use which one.

Git Clone

Cloning creates a local copy of a repository. When running git clone, you get:

  • The entire .git directory
  • All branches and history
  • A working copy of the files
  • Remote tracking to the original repo (origin)

Example:

git clone https://github.com/user/repo
cd repo
git remote -v # Shows origin pointing to source

GitHub Fork

Forking is a GitHub feature (not Git) that creates your own copy of a repo on GitHub. Key points:

  • Lives on GitHub under your account
  • Independent from the original repo
  • Enables pull request workflow
  • Can sync with original (upstream) repo

Typical fork workflow:

# 1. Fork via GitHub UI
# 2. Clone your fork
git clone https://github.com/YOUR-USERNAME/repo
# 3. Add upstream remote
git remote add upstream https://github.com/ORIGINAL-OWNER/repo
# 4. Create branch and work
git checkout -b feature-branch

When to Clone

Use clone when:

  • You have write access to the repo OR
  • You just need a local copy to work with
  • You’re doing internal development
  • You don’t plan to contribute back but just want to run the code locally

When to Fork

Fork when:

  • Contributing to open source projects
  • You need your own version of a project
  • You don’t have write access to original repo
  • You want to propose changes via pull requests

Keeping Forks Updated

To sync your fork with upstream:

git fetch upstream
git checkout main
git merge upstream/main
git push origin main

Key Differences Between Forking and Cloning

Here’s a breakdown of the technical and practical differences:

  • Scope: Forking creates a copy of the repository on GitHub under your account; cloning creates a local copy of a repository on your machine.
  • Location: A fork lives server-side (on GitHub); a clone is local (on your computer).
  • Ownership: You own the fork and have full control over it; with a clone you don’t own the repository, you just have a local copy.
  • Collaboration: Forking is designed for contributing to projects via pull requests; cloning is primarily for local development or direct pushes (if you have access).
  • Upstream Relationship: A forked repo is independent but can sync with the original via remotes; a cloned repo is tied to the original remote (origin) unless reconfigured.
  • Use Case: Forking is ideal for contributing to open-source projects or maintaining a derivative; cloning is ideal for local development, testing, or private work.
  • Git Command: Fork is not a Git command, it’s a GitHub feature (more on using the GitHub CLI below); git clone <url> is a native Git command.

Using GitHub CLI

The GitHub CLI (gh) makes working with forks and clones even easier:

# Fork and clone in one command
gh repo fork user/repo --clone=true

# Just fork (no clone)
gh repo fork user/repo

# Clone your existing fork
gh repo clone YOUR-USERNAME/repo

# Create PR from your fork
gh pr create --base main --head YOUR-USERNAME:feature-branch

Pro tip: gh repo fork automatically sets up the upstream remote for you, saving the manual git remote add upstream step.

Installing the GitHub CLI

# macos
brew install gh
# windows
winget install -e --id GitHub.cli -s winget
# linux - thanks -> https://dev.to/raulpenate/begginers-guide-installing-and-using-github-cli-30ka

# Arch
sudo pacman -S github-cli

# Debian, Ubuntu Linux, Raspberry Pi OS (apt)
(type -p wget >/dev/null || (sudo apt update && sudo apt-get install wget -y)) \
&& sudo mkdir -p -m 755 /etc/apt/keyrings \
&& wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
&& sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
# Upgrade
sudo apt update
sudo apt install gh

# Fedora, CentOS, Red Hat Enterprise Linux (dnf)
sudo dnf install 'dnf-command(config-manager)'
sudo dnf config-manager --add-repo https://cli.github.com/packages/rpm/gh-cli.repo
sudo dnf install gh --repo gh-cli
#Alternatively, install from the community repository:
sudo dnf install gh
#Upgrade
sudo dnf update gh

# openSUSE/SUSE Linux (zypper)
sudo zypper addrepo https://cli.github.com/packages/rpm/gh-cli.repo
sudo zypper ref
sudo zypper install gh
# Upgrade
sudo zypper ref
sudo zypper update gh

Common Fork Workflow

Here’s a typical workflow I use:

# Fork and clone
gh repo fork original-owner/repo --clone=true

# Create feature branch
git checkout -b my-feature

# Make changes, then commit
git add .
git commit -m "feat: add awesome feature"

# Push to fork
git push -u origin my-feature

# Create PR using GitHub CLI
gh pr create

Closing Thoughts

Both forking and cloning use Git’s object model (blobs, trees, commits), but:

  • Clone: Local copy of Git objects
  • Fork: Server-side copy with independent Git refs

The main difference is where the copy lives and how you can interact with the original repo.

Also, when cloning a repo, be aware of how you’ve authenticated. If you clone using SSH, you’ll need to add the SSH key to your GitHub account.

If you clone using HTTPS, you’ll need to authenticate with your GitHub username and a personal access token (GitHub no longer accepts account passwords for Git operations over HTTPS).

I highly recommend using SSH or the GitHub CLI to clone repos - it’s much easier once you’ve set it up.
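
If you go the SSH route, the setup is roughly this (the key file name and title are just examples), and the GitHub CLI can upload the public key for you:

# Generate a new SSH key pair (accept the default path or pick your own)
ssh-keygen -t ed25519 -C "you@example.com"

# Upload the public key to your GitHub account with the GitHub CLI
gh ssh-key add ~/.ssh/id_ed25519.pub --title "my-laptop"

# Verify that SSH authentication to GitHub works
ssh -T git@github.com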

Happy Cloning and Forking!

Python: Virtual Environments

Python Virtual Environments (venv) is a Python module that reduces dependency and version conflicts by:

  • Isolating the python environment
  • Separating dependencies on a project basis

Usage

# Create a venv:
python3 -m venv .venv

# Activate venv:
source .venv/bin/activate

# On Windows
.\.venv\Scripts\activate

# deactivate
deactivate

Workflow

  1. Create a venv using the module (-m) argument or via your IDE (VSCode -> ctrl + shift + p -> Python: Create Environment)
  2. Activate it using your OS-specific command
  3. Add .venv (the venv name in the example above) to .gitignore
  4. Develop Python code, install packages: pip install <package>
  5. Once done, freeze requirements: pip freeze > requirements.txt
  6. Recreate the exact environment on another host: pip install -r requirements.txt (see the consolidated example below)
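
Put together, a typical round trip looks something like this (the package name is just an example):

python3 -m venv .venv                 # 1. create the environment
source .venv/bin/activate             # 2. activate it (Linux/macOS)
echo ".venv/" >> .gitignore           # 3. keep the venv out of version control
pip install requests                  # 4. install what you need
pip freeze > requirements.txt         # 5. pin the dependencies

# On another host:
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt       # 6. recreate the exact environment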

Mucho Importante

  1. Always activate the venv when working on the project
  2. Add the venv directory name to .gitignore; other users will build it locally
  3. Use pip freeze after installing new dependencies to update the requirements.txt

Always look on the bright side of isolation ✅

Happy coding

PowerShell: Restore a DNS zone in Active Directory

Beware of the copy-paste trap! Always test public code in a safe, isolated environment before running it in production.

The fast version

Did someone just remove a very important AD-integrated DNS forward lookup zone for you?

Hang tight, and I’ll show you how to get it back.

  1. Using Domain Admin access rights, open an elevated PowerShell session with the DnsServer and ActiveDirectory modules imported
  2. Open notepad and save the script below as “Restore-ADDNSZone.ps1” at any location
  3. .\Restore-ADDNSZone.ps1 -ZoneName 'myzone.org'
  4. If the zone was just deleted and the DC has access to the deleted zone objects, your zone will be restored. Verify by looking in DNS management.

If you’re not in a hurry, I recommend that you read what the script does first and test it in a lab.


DNS Zone restore the simple way

I wrote a simple script to demonstrate how a DNS zone restore can be achieved using the Restore-ADObject cmdlet:

  • Importing Required Modules: Loads ActiveDirectory and DnsServer modules.
  • Setting Parameters: Allows specifying a DNS zone name, defaulting to “ehmiizblog”.
  • Searching for Deleted Zone: Looks for the deleted DNS zone in known AD locations.
  • Retrieving Deleted Records: Fetches resource records for the deleted zone.
  • Restoring Zone & Records: Restores the DNS zone and its records to their original names.
  • Restarting DNS Service: Restarts the DNS service to apply changes.
  • Output Messages: Provides feedback on the restoration progress and completion.

#Requires -Version 5.0 -Modules DnsServer, ActiveDirectory
param(
    [string]$ZoneName = "ehmiizblog"
)
<#
.SYNOPSIS
    Restores a DNS zone using the DnsServer & ActiveDirectory modules
.DESCRIPTION
    An AD-integrated DNS primary zone can be quickly restored, with
    all its records, using this script. The script looks in known
    locations for the deleted zone and its resource records (`dnsZone`
    & `dnsNode` objects). The restored zone is also renamed to its
    original name.
.EXAMPLE
    .\Restore-ADDNSZone.ps1 -ZoneName 'myzone.org'
.NOTES
    Run this in a lab setting before you try it in prod
#>
Import-Module ActiveDirectory, DnsServer -ErrorAction Stop

$DomainDN = (Get-ADDomain).DistinguishedName

# Known AD locations where deleted dnsZone/dnsNode objects end up
[System.Collections.ArrayList]$global:DeletedZoneDN = @(
    "CN=MicrosoftDNS,CN=System,$DomainDN"
    "DC=DomainDnsZones,$DomainDN"
)

function Get-DeletedDNSZone {
    param(
        [string]$ZoneName = $ZoneName
    )
    $DeletedZoneDN | ForEach-Object {
        # Define the lookup parameters
        $FindZoneSplat = @{
            LDAPFilter            = "(&(name=*..Deleted-$($ZoneName)*)(ObjectClass=dnsZone))"
            SearchBase            = $_
            IncludeDeletedObjects = $true
            Properties            = "*"
        }
        # Look for the zone
        $LookForTheZone = Get-ADObject @FindZoneSplat
        if (-not [System.String]::IsNullOrEmpty($LookForTheZone)) {
            $LookForTheZone | Select-Object -First 1
        }
    }
}

function Get-DeletedDNSZonesResourceRecords {
    # Get the deleted DNS zone
    $DeletedZone = Get-DeletedDNSZone
    if ([string]::IsNullOrEmpty($DeletedZone)) {
        Write-Warning -Message "Zone: $ZoneName not found."
        Break
    }
    # Use the zone's deletion timestamp to find records changed at the same time
    $DeletionTimeStamp = $DeletedZone.whenChanged
    # Iterate over each DeletedZoneDN
    $DeletedZoneDN | ForEach-Object {
        Get-ADObject -Filter { WhenChanged -ge $DeletionTimeStamp -and ObjectClass -eq 'dnsNode' -and isDeleted -eq $true } -SearchBase $_ -IncludeDeletedObjects
    }
}

$TheDeletedZone = Get-DeletedDNSZone
$TheDeletedRecords = Get-DeletedDNSZonesResourceRecords

if ($TheDeletedZone -and $TheDeletedRecords) {
    Write-Output "Starting the zone restore.."
    $TheDeletedZone | Restore-ADObject -NewName $ZoneName -Verbose -ErrorAction Stop
    $TheDeletedRecords | Restore-ADObject -Verbose -ErrorAction Stop
    Restart-Service DNS -Verbose -ErrorAction Stop
    Write-Output "Zone restore completed."
}

Didn’t work, what now

If you have access to a backup of the DNS server, you can export a .dns file and rebuild the zone on the production server.

The steps below will vary depending on your situation, but they might give you an idea of the process:

Sidenote: the “Above explained” points add further explanation to the command run in the previous step.

  1. Connect to the backup DC
  2. Export the zone using dnscmd: dnscmd /ZoneExport zone.org zone.org_backup.dns
  3. Attach a disk or storage device to the DC, mount it and move the newly created zone data file zone.org_backup.dns to it
  4. Attach the disk to the PDC
  5. Copy the file to system32\dns
  6. Create the new zone using dnscmd:
    • dnscmd SERVER /zoneadd zone.org /primary /file zone.org_backup.dns
    • Above explained: Adds a zone to the DNS server.
    • dnscmd SERVER /zonereload zone.org
    • Above explained: Reloads the zone information from its source file.
    • This creates a non-AD-integrated DNS zone with resource records from the export
  7. Convert the zone from non-AD-integrated into AD-integrated:
    • dnscmd SERVER /zoneresettype zone.org /dsprimary
    • Above explained: Creates an Active Directory-integrated zone.


Happy restoring

Linux on GU605MI: Sound, Keyboard, Brightness & asusctl

Disclaimer: Please note that while these steps have been provided to assist you, I cannot guarantee that they will work flawlessly in every scenario. Always proceed with caution and make sure to back up your data before making any significant changes to your system.

Written on 2024-05-08 (Note: Information may become outdated soon, and this was just my approach)

If you’re a proud owner of the 2024 Asus Rog Zephyrus G16 (GU605MI) and running Fedora 40+, ensuring smooth functionality of essential features like sound, keyboard, screen brightness, and asusctl might require a bit (hehe) of tweaking.

Here’s a comprehensive guide, or really the steps I took, to get everything up and running.

Ensure Kernel Compatibility

First things first, ensure that your kernel version is at least 6.9.*. If you’re on a newer kernel, skip this step.

Kernel 6.9 has audio improvements for Intel’s new 14th gen CPUs, so it’s mandatory for the GU605 to have it.
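
You can check which kernel you’re currently running with:

uname -r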

You might want to research how to perform this in a safer way.

I trust in Fedora and the Copr build system, so I just executed the following:

sudo dnf copr enable @kernel-vanilla/mainline
sudo dnf update -y
# Wait for transactions to complete (may take 5+ minutes)
systemctl reboot

Follow the Fedora Workstation Guide

Refer to the Fedora Workstation guide provided by Asus: Fedora Guide. The steps I took myself were the following:

# Updates the system
sudo dnf update -y
sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

# Installs the nvidia driver
sudo dnf update -y
sudo dnf install kernel-devel
sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda

# Enable hibernate
sudo systemctl enable nvidia-hibernate.service nvidia-suspend.service nvidia-resume.service nvidia-powerd.service

# Install asusctl and supergfxctl, used to interact with the system
# Installs Rog Control gui (to interact with the command line interfaces graphically)
sudo dnf copr enable lukenukem/asus-linux
sudo dnf update

sudo dnf install asusctl supergfxctl
sudo dnf update --refresh
sudo systemctl enable supergfxd.service

sudo dnf install asusctl-rog-gui

Install Firmware, needed as of 2024-05-08

In the future the firmware might be added to the Linux kernel; if the sound works great after you’ve updated the system, skip this step.

The sound will not work without the correct firmware. We can clone the correct firmware and copy it over to our system with the following lines:

git clone https://gitlab.com/kernel-firmware/linux-firmware.git
cd linux-firmware
sudo dnf install rdfind
make install DESTDIR=installdir
sudo cp -r installdir/lib/firmware/cirrus /lib/firmware
systemctl reboot

Fix Screen Brightness

The screen’s brightness works out of the box while on the dGPU.

However, that comes with certain drawbacks, like flickering Electron applications and increased power consumption. The steps below get the screen brightness controls to work in “Hybrid” and “Integrated” mode (while the display is run by the iGPU).

Open the grub configuration file:

sudo nano /etc/default/grub

Add the following string at the end of the line GRUB_CMDLINE_LINUX=:

quiet splash nvidia-drm.modeset=1 i915.enable_dpcd_backlight=1 nvidia.NVreg_EnableBacklightHandler=0 nvidia.NVreg_RegistryDwords=EnableBrightnessControl=0

After editing, the line should look like this:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.driver.blacklist=nouveau modprobe.blacklist=nouveau rhgb quiet rd.driver.blacklist=nouveau modprobe.blacklist=nouveau acpi_backlight=native quiet splash nvidia-drm.modeset=1 i915.enable_dpcd_backlight=1 nvidia.NVreg_EnableBacklightHandler=0 nvidia.NVreg_RegistryDwords=EnableBrightnessControl=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true

Update the grub configuration and reboot:

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
systemctl reboot

With these steps, I was able to get a somewhat functional GU605MI Fedora system. If you encounter any issues, refer to the respective documentation or seek further assistance from the Asus-Linux community.

Happy computing!

PowerShell Guide: Script as a Windows Service

Red or blue pill

If you are in the same rabbit-hole as I was, setting up a Windows Service from some form of looping script, there are two pills you can choose from:

  1. Red Pill: Create a program that abides by the law of the fearsome Service Control Manager.

  2. Blue Pill: Write a PowerShell script, 8 lines of XML, and download WinSW.exe

WinSW describes itself as follows:

A wrapper executable that can run any executable as a Windows service, in a permissive license.

Naturally as someone who enjoys coding with hand grenades, I took the Blue Pill and here’s how that story went:

The Blue Pill

  1. Create a new working directory and save it to a variable
$DirParams = @{
    ItemType    = 'Directory'
    Name        = "PowerShell_Service"
    OutVariable = 'WorkingDirectory'
}
New-Item @DirParams
  2. Download the latest WinSW-x64.exe to the working directory
# Get the latest WinSW 64-bit executable browser download url
$ExecutableName = 'WinSW-x64.exe'
$LatestURL = Invoke-RestMethod 'https://api.github.com/repos/winsw/winsw/releases/latest'
$LatestDownloadURL = ($LatestURL.assets | Where-Object {$_.Name -eq $ExecutableName}).browser_download_url
$FinalPath = "$($WorkingDirectory.FullName)\$ExecutableName"

# Download it to the newly created working directory
Invoke-WebRequest -Uri $LatestDownloadURL -Outfile $FinalPath -Verbose
  3. Create the PowerShell script which the service runs

This loop checks for notepad every 5 sec and kills it if it finds it

while ($true) {
    $notepad = Get-Process notepad -ErrorAction SilentlyContinue
    if ($notepad) {
        $notepad.Kill()
    }
    Start-Sleep -Seconds 5
}
  4. Construct the .XML file

Just edit the id, name, description and startarguments

<service>
  <id>PowerShellService</id>
  <name>PowerShellService</name>
  <description>This service runs a custom PowerShell script.</description>
  <executable>C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe</executable>
  <startarguments>-NoLogo -file C:\Path\To\Script\Invoke-PowerShellServiceScript.ps1</startarguments>
  <log mode="roll"></log>
</service>

Save the .xml, in this example I saved it as PowerShell_Service.xml

# if not already, step into the workingdirectory
cd $WorkingDirectory.FullName

# Install the service
.\WinSW-x64.exe install .\PowerShell_Service.xml

# Make sure powershell.exe's executionpolicy is Bypass
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine

# As an administrator
Get-Service PowerShellService | Start-Service

Conclusion

Running a PowerShell script as a service on any Windows machine isn’t that complicated thanks to WinSW. It’s a great choice if you don’t want to get deeper into the process of developing Windows services (it’s kind of a fun rabbit-hole though).

I recommend reading the WinSW docs.

Some things to consider:

  • The service will run PowerShell 5.1 as System
  • Meaning the execution policy must support that use case (Bypass at the LocalMachine scope will do)
  • The script in this example is just a demo of a loop, but anything you can think of that loops will do here
  • Starting the Service requires elevated rights in this example
  • If you get the notorious The service did not respond to the start or control request in a timely fashion, you have my condolences (This is a very general error msg that has no clear answer by itself it seems)

Good luck have fun, happy coding

/Emil

How to Restore a File from Git

Git is a powerful and popular version control system, sometimes a bit too powerful.

Depending on how your day went, you may want to restore a file from git to a previous state, either because you made an oopsie, want to undo some changes, or need to compare different versions.

Let’s go through four common scenarios on how to do just that!

Scenario 1: Saved Locally on the Local Git Repository

The simplest scenario is when you have saved your file locally on your local git repository, but have not staged or committed it yet.

In this case, you can use the git restore command to discard the changes in your working directory and restore the file to the last committed state.

For example, if you want to restore a file named index.html, you can run the following command:

git restore index.html

This will overwrite the index.html file in your working directory with the version from the HEAD commit, which is the latest commit on your current branch.

You can also use a dot (.) instead of the file name to restore all the files in your working directory.

git restore .

Scenario 2: Saved Locally and Staged Locally

The next scenario is when you have saved your file locally and staged it locally, but have not committed it yet.

In this case, you can use the git restore --staged command to unstage the file and remove it from the staging area.

For example, if you want to unstage a file named index.html, you can run the following command:

git restore --staged index.html

This will remove the index.html file from the staging area and leave it in your working directory with the changes intact. You can then use the git restore command as in the previous scenario to discard the changes in your working directory and restore the file to the last committed state. Alternatively, you can combine the flags:

git restore --source=HEAD --staged --worktree

to unstage and restore the file in one step.

For example, if you want to unstage and restore a file named index.html, you can run the following command:

git restore --source=HEAD --staged --worktree index.html

This will remove the index.html file from the staging area and overwrite it in your working directory with the version from the HEAD commit. You can also use a dot (.) instead of the file name to unstage and restore all the files in your staging area and working directory.

Scenario 3: Saved Locally, Staged Locally and Committed

The third scenario is when you have saved your file locally, staged it locally and committed it, but have not pushed it to the remote repository yet. In this case, you can use the git reset --hard command to reset your local branch to the previous commit and discard all the changes in your staging area and working directory. For example, if you want to reset your local branch to the previous commit, you can run the following command:

git reset --hard HEAD~1

This will reset your local branch to the commit before the HEAD commit, which is the latest commit on your current branch.

This will also discard all the changes in your staging area and working directory, including the file you want to restore.

Alternatively, if you only want to restore a single file to its previous state without discarding the whole commit, you can use the git checkout command to check out the file from the previous commit and restore it to your working directory.

For example, if you want to check out and restore a file named index.html from the previous commit, you can run the following command:

git checkout HEAD~1 index.html

This will check out the index.html file from the commit before the HEAD commit and overwrite it in your working directory with the version from that commit.

You can also use a dot (.) here as well, to check out and restore all the files from the previous commit.

Scenario 4: Saved Locally, Staged Locally, Committed and Pushed to Remote Repository

The fourth and final scenario is when you have saved your file locally, staged it locally, committed it and pushed it to the remote repository.

In this case, you can use the git revert command to create a new commit that reverses the changes in the previous commit and restores the file to the state before that commit.

For example, if you want to revert the previous commit and restore a file named index.html to the state before that commit, you can run the following command:

git revert HEAD

This will create a new commit that reverses the changes in the HEAD commit, which is the latest commit on your current branch.

This will also restore the index.html file in your working directory and staging area to the version from the commit before the HEAD commit.

You can then push the new commit to the remote repository to update it with the reverted changes.

You can also use the --no-commit option to revert the changes without creating a new commit, and then use the git restore or git checkout commands as in the previous scenarios to restore the file to the desired state.
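
For example, to revert the latest commit but keep the resulting changes uncommitted so you can review them first:

git revert --no-commit HEAD
git status                          # review what the revert staged
git commit -m "Revert previous change"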

To sum it up

We’ve demonstrated how to restore a file from git in four different scenarios, depending on how far you have progressed in the git workflow.

We have used the git restore, git reset, git checkout and git revert commands to discard, unstage, check out and revert changes in your files and restore them to the previous states.

I hope this post has been helpful and maybe even saved some headache!

If you have any questions or feedback, please feel free to DM me on Twitter or LinkedIn.

Happy coding

PowerShell 7.4: Install-Module is evolving.

Where does Install-Module come from?

Install-Module has evolved.

Have you ever asked yourself, what module imports the Install-Module cmdlet? It’s kind of a meta question, check for yourself! Spoiler a bit down for anyone reading on mobile.

Get-Command -Name Install-Module
    CommandType     Name                                               Version    Source
    -----------     ----                                               -------    ------
    Function        Install-Module                                     2.2.5      PowerShellGet

New sheriff in town 🤠

With the GA release of PowerShell 7.4, a rewrite of PowerShellGet is included (hint hint, renamed to PSResourceGet), and boy is it fast.

I installed PowerShell 7.4 on two different Ubuntu 20.04 WSL distros, and I installed a few modules to benchmark the old trusty Install-Module and the new sheriff in town: Install-PSResource.

The results speak for themselves. PSResourceGet is much faster than PowerShellGet V2.

Speaking of PowerShellGet V2, there’s still a future for this module, but instead of new APIs and features, V3 (currently in pre-release) has been converted to a compatibility layer over the new and faster PSResourceGet.

Install-Module -Name PowerShellGet -AllowPrerelease -Force

The parameters of the new PSResourceGet are not supported when calling the older cmdlets, and there’s no official documentation out for PowerShellGet V3 yet, so to me this seems aimed purely at pipeline scenarios where you have code in place that can just use the new functionality. It has less to do with interactive use, it seems. Here’s some further reading on the subject.

Let’s take PSResourceGet for a spin

PSResourceGet seems to me an awesome new module based on its speed, so better get used to its new syntax, because this will be my new main driver for sure.

Get-Command -Module Microsoft.PowerShell.PSResourceGet | sort Name
CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Find-PSResource                                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-InstalledPSResource                            1.0.1      Microsoft.PowerShell.PSResourceGet
Alias           Get-PSResource                                     1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-PSResourceRepository                           1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-PSScriptFileInfo                               1.0.1      Microsoft.PowerShell.PSResourceGet
Function        Import-PSGetRepository                             1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Install-PSResource                                 1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          New-PSScriptFileInfo                               1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Publish-PSResource                                 1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Register-PSResourceRepository                      1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Save-PSResource                                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Set-PSResourceRepository                           1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Test-PSScriptFileInfo                              1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Uninstall-PSResource                               1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Unregister-PSResourceRepository                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSModuleManifest                            1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSResource                                  1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSScriptFileInfo                            1.0.1      Microsoft.PowerShell.PSResourceGet

What’s installed?

It’s not only installing modules that’s faster, it’s also very fast at getting installed modules.

Getting installed modules can be very time-consuming on shared systems, especially where you have the Az modules installed, so this is a great performance win overall.

Find new stuff

Finding new modules and scripts is also a crucial part of PowerShell, especially for community members. I would argue that with PSResourceGet going GA, PowerShell 7.4 is probably one of the most significant performance boosts in PowerShell’s open source life.

As you can see, finding modules is way faster, and here we’re even using two-way wildcards.

What about publishing?

Let’s try to use the new Publish-PSResource. I have a minor bug-fix to do on my project linuxinfo and will edit my publishing script so that a GitHub Action will publish it for me using Publish-PSResource.

I start by editing my very simple publishing script. Since I don’t know if the GitHub-hosted runner will have PSResourceGet installed yet, I need to validate that the cmdlet is present before calling it. If it’s not, I’m simply installing it using PowerShellGet v2.

This should do it!

Hmm, seems like I messed something up. The GitHub-hosted runner can’t find Publish-PSResource, so it’s trying to install PSResourceGet using Install-Module. However, I misspelled the module name if you look closely at line 7. It should be Microsoft.PowerShell.PSResourceGet; let’s fix that and re-run my workflow.

Looks way better now!

And there’s a new version of linuxinfo with a minor bugfix. And the Publish-PSResource migration was very straightforward.

Conclusion

In this post, we learned about the origin of Install-Module, being PowerShellGet v2, and its successor Install-PSResource, from the new PSResourceGet. We took some cmdlets for a spin and realized that the new version is easily twice as fast, in some cases even three times faster.

We covered PowerShellGet V3 being a compatibility layer and some caveats with it.

We looked at migrating a simple publishing script from Publish-Module to Publish-PSResource.

I recommend poking around with the new PSResourceGet cmdlets and reading its official documentation, and for interactive use not relying on any compatibility layer; save that for the edge cases.

Thanks for reading this far, hope you found it helpful. PM me on twitter for any feedback.

Happy coding

/Emil