Greener Homes Grant program proves more popular than federal government expected
  • Avatar: "Initials" by Florian Körner, licensed under CC0 1.0, remixed from the original (created with dicebear.com).
    lxpw
    9mo ago 100%

    I got approved a few months ago and will have solar panels installed in the next few months. I am happy to hear there will be a second stage, as I want to get my windows replaced next. If AB could elect a competent government, I would also replace my mid-efficiency furnace.

    3
  • opentofu.org

    The first stable release of OpenTofu (the fork of Terraform) is now available. It lags behind the current 1.6.6 release of Terraform, but it is a big first step. This release is backwards compatible with Terraform 1.6.0 and includes a few new features. The big ones:

    - [Module testing feature](https://opentofu.org/blog/what-we-learned-while-working-on-opentofus-new-test-feature/)
    - [Enhanced S3 state backend](https://github.com/opentofu/opentofu/issues/700)
    - [A default registry (registry.opentofu.org) to replace registry.terraform.io](https://github.com/opentofu/registry)

    [Change log](https://github.com/opentofu/opentofu/blob/v1.6/CHANGELOG.md)

    4
    1
    aws.amazon.com

    Amazon has finished setting up their second Canadian AWS region. This is big news for anyone in western Canada, where regional public cloud coverage has been non-existent until now. Previously, your only options had been eastern Canada (Montreal) and eastern Canada (Toronto). This is also big news for data sovereignty on AWS: until now, you didn't have an option for a Canadian disaster recovery region. AWS only had a single Canadian region (ca-central-1), so your DR site would need to be in another country.

    To use this region, you will need to enable it under your billing dashboard, as new regions are not enabled by default. This region has 3 AZs, which is what you need for proper clustering. For the longest time, the ca-central-1 (Montreal) region only had 2 AZs. I remember getting asked in a job interview how many AZs ca-central-1 had and I correctly answered 2. They were convinced all regions had a minimum of 3 AZs and I got docked points. I am still fuming.

    Warning: Advanced technical networking and location ramblings below

    The new region has [2 × 100G connections](https://yycix.ca/peers.html) to the [Calgary Internet Exchange Point (YYCIX)](https://yycix.ca/). They terminate at Equinix CL-3 and DataHive. I suspect the first AZ is the standalone AWS datacentre just off of Glenmore east, whose location had leaked. The second one is probably located west downtown (just outside of the ~~100~~ 25 year flood plain) close to DataHive. The DataHive datacentre is tiny, so co-locating an entire Amazon AZ there is not happening, but downtown Calgary has plenty of cheap office space for a datacentre conversion. The third AZ is probably co-located at Arrow DC2 south of the airport or eStruxture CAL-2 up past the airport. Co-location would explain why there isn't a third connection to YYCIX.
As this region is directly connected to YYCIX, this means traffic will not be routing down to the Seattle IXP, unless you yourself are on a local ISP (Shaw) that doesn't connect to YYCIX yet. I didn't believe this rumour was still true, but I did some digging and I am not seeing a [YYCIX connection registered](https://www.peeringdb.com/net/317) for Shaw.
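If you have never opted in to a new region before, the flow looks roughly like this (a sketch using the AWS Account Management CLI; I'm assuming the Calgary region uses the ca-west-1 region code):

```shell
# Opt in to the new region; opt-in regions are disabled by default.
aws account enable-region --region-name ca-west-1

# Poll until the status reads ENABLED (it can take a few minutes).
aws account get-region-opt-status --region-name ca-west-1
```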

    5
    1
    OpenTF has been renamed OpenTofu
    lxpw
    1y ago 100%

    I think it is the best option of all the possible choices I have seen, and I can see how the 'Open' they tacked on is required for finding the project through searches. Adoption would have been awful if they stuck with just 'Tofu'. Adoption of tofu as a meat substitute could have improved, though.

    1
  • www.linuxfoundation.org

    It looks like the 'TF' part of OpenTF was too similar to Terraform and they have come up with a new name for the project. In addition, the project is now a part of the Linux Foundation and they have a new website. https://opentofu.org/

    21
    3
    https://health.aws.amazon.com/health/status

    us-east-1 and us-west-2 regions are experiencing networking issues, and it is also having an effect on a number of other cloud services that rely on those regions. The number of AWS services this is affecting is growing and will probably cover the majority of their services to some degree. It isn't a full network outage, but a sporadic one (too much load?). As in, one ECS task will be able to register itself with the application load-balancer, while another one will not. If you have an automated environment, this is causing rolling failures right now. This is impacting one of my clients greatly as those are their primary and DR regions. We are considering deploying DR in a 3rd region, but it could take hours to replicate their database.

    8
    1
    https://opentf.org/fork

    The Github repository for the community fork of Terraform (called OpenTF) has been made public. If you use any third-party tooling (Spacelift, Scalr, Env0, Terraspace, Terragrunt, Atlantis, Digger, etc.) you will probably want to plan a switch to OpenTF to remain license compliant. Well, it is actually more about the third-party tool's compliance: from this point forward, their documentation can't tell you to install a version of Terraform higher than 1.5.5. You will start to see them transition over to suggesting OpenTF instead, once a stable release is available.

    OpenTF plans to remain feature compatible with Terraform, but I could see, in the future, new features being added to OpenTF that third-party tool providers require. I wouldn't compile and use the current OpenTF code for production or even development use yet, but if you want to contribute to the project, now is your chance. https://github.com/opentffoundation/opentf

    The first stable release should be coming by October 1st. https://github.com/opentffoundation/opentf/milestone/3
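If you want your pipelines to flag non-compliant versions in the meantime, a tiny (hypothetical) helper is enough, since HashiCorp's BSL only applies to releases after 1.5.5:

```python
# Hypothetical helper: releases at or below 1.5.5 are the last ones
# published under the MPL 2.0; everything later falls under the BSL.
def is_mpl_licensed(version: str) -> bool:
    """Return True if `version` (e.g. "1.5.5") is an MPL-licensed release."""
    parts = tuple(int(p) for p in version.split("."))
    return parts <= (1, 5, 5)

print(is_mpl_licensed("1.5.5"))  # True
print(is_mpl_licensed("1.6.0"))  # False
```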

    12
    0
    aws.amazon.com

    CloudFormation is the most feature-poor of the Infrastructure-as-Code template languages. It is miles behind Terraform, Azure ARM/Bicep, and Google Cloud Deployment Manager. I don't think there has been any direct improvement to the language syntax since the introduction of YAML support over a decade ago. The core syntax and functionality of CloudFormation have been frozen for many years and there is no sign that will ever change. From the outside, it appears the CloudFormation team has been under-resourced and left to rot. Support for new resource types and properties can take up to 2 years to get implemented. If you are tied to CloudFormation and need support for a new resource type or property, you are left creating and maintaining custom resource types (Lambda functions).

    All recent language improvements have been in the AWS::LanguageExtensions transform, which is just an AWS-managed CloudFormation Macro, and it was only released last September. CloudFormation Macros are Lambda functions that run against a template before it is processed. They allow you to interpret your own syntax changes and transform the template before deployment. Before this looping function support, the AWS::LanguageExtensions transform didn't contain any functionality that made it compelling to use. If you were already aware of how to extend CloudFormation, you probably already had a collection of CloudFormation Macros that went above and beyond the functionality of the AWS::LanguageExtensions transform.

    Currently, if you want to do anything more advanced than what is built in, you have to create and maintain your own CloudFormation Macros (more Lambda functions). They are a pain to debug, add a lot more complexity, and increase your maintenance workload. Having AWS provide a macro that greatly extends CloudFormation into something usable would be awesome. We just aren't there yet, but this update shows there is some life left in the corpse.
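For the curious, the new looping support looks something like this (a sketch based on the Fn::ForEach syntax; the topic names are made up):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::LanguageExtensions
Resources:
  # Creates SnsOrders, SnsInvoices, and SnsRefunds without copy-pasting
  # three nearly identical resource blocks.
  'Fn::ForEach::Topics':
    - TopicName                      # loop variable
    - [Orders, Invoices, Refunds]    # collection to iterate over
    - 'Sns${TopicName}':             # logical ID template, one per value
        Type: AWS::SNS::Topic
        Properties:
          TopicName: !Sub 'app-${TopicName}'
```

Before this, the only way to stamp out N copies of a resource was a custom macro or an external templating step.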

    2
    0
    https://medium.com/@DiggerHQ/digger-a-scalable-and-secure-alternative-to-atlantis-for-terraform-automation-and-collaboration-f5ac9103202a

    I have been researching the current state of Terraform automation and collaboration tools on behalf of a client and this is a new one that has emerged as a possible option. The client needs something to help manage their many pipelines and state files, but they are not big enough to need a full enterprise Terraform management platform such as Spacelift, Scalr, or Env0. Atlantis was on the short list, but it is showing its age and this looks to be a better product and a good middle-ground solution. With the recent Hashicorp licensing change, this product may also be impacted. The developers claim they are not using any Hashicorp code and are not affected, but their code does execute a terraform command process, which might still run afoul of the "embedded" part of Hashicorp's BSL "Additional Use Grant". Since they are also the creators of the first fork of the MPL-licensed Terraform code-base, they will surely be under the watchful eyes of Hashicorp's lawyers.

    2
    0
    www.hashicorp.com

    The recent change in licensing across all Hashicorp products shows that Hashicorp is unable or unwilling to compete with competitors to their enterprise offerings. Even though they officially don't state it, the change is targeted at competitors such as Spacelift, Scalr, and Env0. Those competitors only came to be to fill in the gaps left by Hashicorp's lacklustre and overpriced Terraform Cloud/Enterprise products.

    The Business Source License (BSL) 1.1 is a source-available license (not an open source one) with additional vague wording designed to prevent competitors from building competing products using the source code. The problem in this situation is that it also extends to additional products produced by the code owner (Hashicorp). This means even an open-source (non-commercial) competitor to the separate Terraform Enterprise product is not allowed to use the terraform command, the Terraform code-base, or any other Hashicorp code-base. Anyone who does any form of Terraform automation that they then provide to their clients for production use will now need to ensure they are not seen as a competitor to a Hashicorp product. [Spacelift](https://spacelift.io/blog/hashicorps-license-change) has already tried to reassure their customers that they are going to work on a solution going forward.

    Even though Hashicorp claims to be supportive of the spirit of open source software, they aren't supportive of open collaboration and they have been resistant to upstream contributions from the community. This resistance created an environment where new enhancement toolsets were created and then evolved into products competing with their enterprise offering. Now that they have changed their licensing, this will further exacerbate the issue.
    A [fork](https://github.com/diggerhq/open-terraform) of the pre-BSL licensed Terraform code-base has already appeared, and if it or another fork gets enough support from the community, we could see the official Terraform toolset being replaced as the de facto Infrastructure-as-Code platform in use today. I myself have created command wrappers and management tooling to work around the limitations of the Terraform command and the lack of state file drift management, so I will be watching what happens closely and am willing to offer my contributions to any potential competitor.

    Additional discussions: [Hacker News: HashiCorp adopts Business Source License](https://news.ycombinator.com/item?id=37081306) - [Hacker News: OpenTerraform – an MPL fork of Terraform after HashiCorp's license change](https://news.ycombinator.com/item?id=37088591)

    7
    2
    Anyone have experience setting up an environment on Beanstalk?
    lxpw
    1y ago 100%

    You can use `aws iam list-instance-profiles` to get a list of what is already created. I suspect there is something else wrong.

    It could be looking for the default Beanstalk instance profile and role (aws-elasticbeanstalk-ec2-role), as it isn't auto-created anymore. It could also be a permission issue with the role's policy.

    https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html

    Elastic Beanstalk is one of the few AWS services I haven't used, as it just deploys a number of other services and resources behind the scenes. It is more of an up-and-running-quick demonstration tool than something you would use IRL. It can be used, but there are better options.

    2
  • Anyone have experience setting up an environment on Beanstalk?
    lxpw
    1y ago 100%

    An instance profile is what I would call a legacy resource that really shouldn't be needed, but is still there in the background for backwards compatibility. You can't attach an IAM role directly to an EC2 instance. You need to have an instance profile in between that is named the same as the IAM role.

    You can create one using every other interface (command line, CloudFormation, Terraform, SDKs, etc.), but not through the web console (browser). From the web console, you would need to recreate the IAM role and make sure you select EC2 as the purpose/service for the role. Only then will it create a matching instance profile alongside your new IAM role.
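From the CLI, the two extra calls look like this (a sketch; `my-ec2-role` is a placeholder for an IAM role that already exists):

```shell
# Create the instance profile with the same name as the role, then
# attach the role to it. The console does both of these implicitly
# when you create an EC2-purposed role.
aws iam create-instance-profile --instance-profile-name my-ec2-role
aws iam add-role-to-instance-profile \
    --instance-profile-name my-ec2-role \
    --role-name my-ec2-role
```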

    2
  • cycode.com

    cross-posted from: https://programming.dev/post/1562654 > FYI to all the VS Code peeps out there that malicious extensions can gain access to secrets stored by other VS Code extensions as well as the tokens used by VS Code for Microsoft/Github. > > I really don’t understand how Microsoft’s official stance on this is that this is working as intended… > > If you weren’t already, be very careful about which extensions you are installing.

    7
    0
    Neat trick for desoldering many-pin components
    lxpw
    1y ago 100%

    I picked up a Hakko desoldering gun many years ago to save me from this. It was pricey (~$300), but has been worth it over the years.

    3
  • ### Aqua Trivy

    Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can find those issues.

    Tags: Security, Vulnerability Scanner, Monitoring

    [Website](https://trivy.dev/) - [Documentation](https://aquasecurity.github.io/trivy/) - [Github Home](https://github.com/aquasecurity/trivy) - [Github Releases](https://github.com/aquasecurity/trivy/releases)

    ### CoreDNS v1.11.0

    CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you are able to do what you want with your DNS data by utilizing plugins.

    Tags: DNS, Kubernetes

    [Website](https://coredns.io/) - [Documentation](https://coredns.io/manual/toc/) - [Github Home](https://github.com/coredns/coredns) - [Github Release](https://github.com/coredns/coredns/releases/tag/v1.11.0)

    ### Go v1.21

    Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.

    Tags: Programming Language, Golang

    [Website](https://go.dev/) - [Documentation](https://go.dev/doc/) - [Github Home](https://github.com/golang/go) - [Release](https://go.dev/dl/)

    ### Hashicorp Consul v1.16.x

    Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.

    [Website](https://www.consul.io) - [Documentation](https://developer.hashicorp.com/consul/docs) - [Github Home](https://github.com/hashicorp/consul) - [Github Release](https://github.com/hashicorp/consul/releases/)

    ### OpenSearch 2.9.0

    OpenSearch is a community-driven, open source fork of [Elasticsearch](https://en.wikipedia.org/wiki/Elasticsearch) and Kibana. Elasticsearch can be used to search any kind of document. It provides scalable search, has near real-time search, and supports multitenancy. Kibana provides visualization capabilities on top of the content indexed on an Elasticsearch cluster.

    Tags: Search Engine, Dashboards, Monitoring

    [Website](https://opensearch.org/) - [Documentation](https://opensearch.org/docs/latest/) - [Downloads](https://opensearch.org/downloads.html) - [Github Home](https://github.com/opensearch-project/OpenSearch) - [Github Release](https://github.com/opensearch-project/OpenSearch/releases/tag/2.9.0)

    ### Podman v4.6.0

    Podman (the POD MANager) is a tool for managing containers and images, volumes mounted into those containers, and pods made from groups of containers. Podman runs containers on Linux, but can also be used on Mac and Windows systems using a Podman-managed virtual machine.

    Tags: Docker, Containers, Command-Line

    [Downloads](https://github.com/containers/podman/blob/main/DOWNLOADS.md) - [Github Home](https://github.com/containers/podman) - [Github Release](https://github.com/containers/podman/releases/tag/v4.6.0)

    ### Prometheus 2.46.0 / 2023-07-25

    Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

    Tags: Monitoring, Observability, Dashboards, Metrics, Alerting

    [Website](https://prometheus.io/) - [Documentation](https://prometheus.io/docs/introduction/overview/) - [Downloads](https://prometheus.io/download/) - [Github Home](https://github.com/prometheus/prometheus) - [Github Release](https://github.com/prometheus/prometheus/releases/tag/v2.46.0)

    2
    0
    AWS will charge for public IPv4 addresses soon
    lxpw
    1y ago 100%

    You would have to use an external tunnel service that gives you an IPv6 address on the internet. As you are sending your traffic through an external provider, it will be slower and they could be monitoring your traffic. Some ISPs even use these tunnelling services to quickly enable IPv6 access.

    Tunnel brokers (RFC 3053) are organizations that provide, often for free, a manually or dynamically configured tunnel that encapsulates your IPv6 packets within IPv4 packets. The IPv6 packets at your home are encapsulated into IPv4 packets and sent across the IPv4-only ISP network to the tunnel broker service. When those packets reach the tunnel broker, they are decapsulated and the IPv6 packets are forwarded to the IPv6 Internet. This method can use a traditional GRE tunnel, an IPv4 protocol 41 tunnel, or might leverage the Tunnel Setup Protocol (TSP) (RFC 5572).
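On Linux, a protocol 41 tunnel is just a `sit` interface; here is a sketch with placeholder addresses (your broker's config page gives you the real ones):

```shell
# 203.0.113.1 = broker's IPv4 endpoint, 192.0.2.10 = your IPv4 address,
# 2001:db8:1234::2/64 = your side of the tunnel's point-to-point subnet.
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 192.0.2.10 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1234::2/64 dev he-ipv6
ip -6 route add default dev he-ipv6
```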

    It looks like Hurricane Electric (https://www.tunnelbroker.net/) is the only one still providing this service, as far as I can find.

    If you use a VPN, that could be another option, if the VPN provider isn't disabling IPv6 out of a potential privacy concern (PIA). Even if the VPN service supports IPv6, most VPN clients do not, and your IPv6 DNS queries could get routed to your ISP. If you were using a VPN for privacy, that would expose what websites you are accessing and defeat the purpose of the VPN. That is why VPN providers will sometimes go out of their way to ensure IPv6 is disabled when the VPN is in use.

    1
  • AWS will charge for public IPv4 addresses soon
    lxpw
    1y ago 100%

    It is looking like Canadian ISP support for IPv6 is still patchy. I am on Teksavvy, which uses the Shaw network in Alberta, and RogShaw doesn't like to provide their struggling micro competitors any perks. I give myself a 4% chance of getting IPv6 support to work.

    If I have time this long weekend, I will try to see if I need to change anything on my Technicolor modem and set up the IPv6 DHCP service on my Mikrotik firewall. My self-managed hardware should support it; my Jekyll and Hyde ISP, probably not.

    Use this to see if your ISP supports the latest 90's technology. https://test-ipv6.com/

    3
  • www.theregister.com

    I was wondering how cloud providers seemed to have bottomless pits of IPv4 addresses and weren't more resistant to handing them out like candy. They should be charging more for this scarce resource. AWS was, until now, the only cloud provider not to charge for static public IPv4 addresses, as long as the elastic IP was in use.

    I fully expect there will be more pressure in the future to have cloud customers use dual-stack (both IPv4 and IPv6) or IPv6-only on externally facing services, and pool services behind application load-balancers or web application firewalls (WAFs). WAFs should support sending incoming IPv4 and IPv6 traffic to an IPv6-only server. Looking at Imperva's (a WAF) documentation, that should work. I haven't tested this, so I might just have to do that.

    >By default Imperva handles load balancing of IPv4 and IPv6 as follows:
    >
    > - IPv4 traffic is sent to all servers.
    > - IPv6 traffic is only sent to the servers that support IPv6.
    > - However, if all your servers that support IPv6 are down, then IPv6 traffic is sent to your IPv4 servers.
    >
    > Imperva also enables you to configure load balancing so that IPv6 traffic is only sent to IPv6 servers and IPv4 traffic is only sent to IPv4 servers. Alternatively, you can configure that Imperva sends traffic to any origin server, regardless of whether it is IPv4 or IPv6.

    https://docs.imperva.com/bundle/cloud-application-security/page/more/ipv6-support.htm

    6
    5
    last9.io

    Prometheus will soon include support for ingesting OpenTelemetry metrics into the platform. Even if you understood all of those words, you might be asking, "so what?". This is a big deal for observability (fancy name for monitoring) as it gets us one step closer to using a single agent to collect all observability telemetry (logs, metrics, traces) from servers.

    Currently you would need to use something like fluentbit/fluentd to collect logs, a Prometheus exporter for metrics, and OpenTelemetry for traces. There are many other tools you might use instead, but these are my current picks. If you are running VMs or physical servers, that means installing and managing three different pieces of software to cover everything. If you are running containers, that could mean up to 3 separate sidecar containers per application container within the same group/task/pod.

    OpenTelemetry is being positioned as a one-stop shop for collecting and working with the three streams of telemetry data (logs, metrics, traces). Currently only trace support is production ready, but work is well under way to get support for logs and metrics ready for prime time. There have been huge moves across the industry to add support for working with OTLP (OpenTelemetry Protocol) data streams. Prometheus is becoming the most popular backend for storing and alerting on metric data. The current blockers have been native support for OTLP ingestion and incompatible metric naming. According to this blog post, we are close to getting these 2 issues resolved.

    https://last9.io/blog/native-support-for-opentelemetry-metrics-in-prometheus/
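As a sketch of where this is heading: recent Prometheus builds hide the OTLP receiver behind a feature flag, and the Collector can then push metrics straight at it (the endpoint path is my assumption from the current docs):

```yaml
# OpenTelemetry Collector config: receive OTLP, export to Prometheus's
# native OTLP endpoint. Prometheus itself must be started with
# --enable-feature=otlp-write-receiver for that endpoint to exist.
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlphttp:
    endpoint: http://prometheus:9090/api/v1/otlp
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
```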

    1
    0
    Question about laptop docks and multiple displays.
    lxpw
    1y ago 100%

    It is preferable to have the dock power the laptops; then there is only 1 cable to plug in. If your personal laptop charges over USB-C, it can probably be powered through the dock. Plugging it in to your work laptop's power supply shouldn't break it, as there is a lot of negotiating taking place before power is provided. You may want to search the internets first.

    The Dell docks are also universal and will work. Avoid HP as they are proprietary. Some other brands (Plugable, Anker) work really well, but may not include the power adapter. Make sure you include the power adapter when comparing docks. I would get the new 100W USB-C adapters (UGreen or Anker) that can power your dock, devices, and laptop (by way of the dock).

    I use a mix of Dell and Anker USB-C docks with Dell, HP, and MacBook laptops, and run up to dual 4K displays and power the laptops (the HPs are limited).

    There are a few things to watch out for. Your laptop's USB-C port needs to be a Thunderbolt port to work with a Thunderbolt dock. If it isn't, you will need a non-Thunderbolt USB-C dock.

    The port needs to support Power Delivery (PD) and may still limit charging to 60W. You should get up to 82W after the dock takes its cut. Some laptops (Dell) support higher charging rates only with their own brand docks. If you are gaming, your battery will drain, just slowly.

    The port should support DisplayPort even if you are using HDMI. Most docks will have a mix of DP and HDMI. You will need an ACTIVE DP to HDMI adapter. If one of your monitors has DP, use that instead of an adapter.

    1
  • aws.amazon.com

    If you use Fargate and have Linux x86 tasks with images over 250 MB, you might be interested in this new feature that should shave time off of your task deployments. One of our clients had just switched all of their tasks over to ARM to cut costs, but they always want a faster deployment pipeline. I might have to give this a try and see if there is a big benefit. I suspect the networking will become the main source of delay, as I remember it taking 1-2 minutes to finish.

    0
    0
    Anyone noticing GitHub reliability issues?
    lxpw
    1y ago 100%

    We have a Slack channel where we dump a number of cloud/service outage RSS feeds. Github has always dominated that channel.

    2
  • The following are some tools you can use to perform security scans on your container images and running containers. These are useful for performing manual audits on existing container images, scanning images as part of a build pipeline, or actively monitoring containers running in production. These can all be implemented for free.

    ## Docker Bench for Security

    [https://github.com/docker/docker-bench-security](https://github.com/docker/docker-bench-security)

    The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production. The tests are all automated, and are based on the [CIS Docker Benchmark v1.5.0](https://www.cisecurity.org/benchmark/docker/).

    ## Aquasecurity Trivy

    [https://github.com/aquasecurity/trivy](https://github.com/aquasecurity/trivy)

    Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can find those issues. You can use [https://github.com/aquasecurity/trivy-action](https://github.com/aquasecurity/trivy-action) to perform scans within your Github Actions workflows.

    ## Anchore Grype

    [https://github.com/anchore/grype](https://github.com/anchore/grype)

    A vulnerability scanner for container images and filesystems. You can use [https://github.com/anchore/scan-action](https://github.com/anchore/scan-action) to perform scans within your Github Actions workflows.

    ## Clair

    [https://github.com/quay/clair](https://github.com/quay/clair)

    Clair is an open source project for the static analysis of vulnerabilities in application containers (currently including OCI and docker). AWS ECR basic scanning uses this project as its backend. You can use [https://github.com/quay/clair-action](https://github.com/quay/clair-action) to perform scans within your Github Actions workflows.

    ## Sysdig Falco

    [https://github.com/falcosecurity/falco](https://github.com/falcosecurity/falco)

    Falco is a cloud native runtime security tool for Linux operating systems. It is designed to detect and alert on abnormal behaviour and potential security threats in real-time. Generally used for active monitoring with Kubernetes clusters, but you can also use it with [ECS Fargate](https://github.com/aws-samples/aws-fargate-falco-examples).

    There are others out there, but these are ones I remember at the moment. If you know of any others, please add them.
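As an example of the build-pipeline use case, a minimal Github Actions job with trivy-action might look like this (the image name is a placeholder):

```yaml
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myorg/myapp:latest
          format: table
          severity: CRITICAL,HIGH
          exit-code: '1'   # fail the job if anything is found
```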

    4
    0

    ### GitHub CLI 2.32.0

    gh is GitHub on the command line. It brings pull requests, issues, and other GitHub concepts to the terminal next to where you are already working with git and your code.

    Tags: Github, Git, Management, Version Control, Command-Line

    [Website](https://cli.github.com/) - [Documentation](https://cli.github.com/manual/) - [Github Home](https://github.com/cli/cli) - [Github Release](https://github.com/cli/cli/releases/tag/v2.32.0)

    ### Gradle 8.2

    Gradle is a build tool with a focus on build automation and support for multi-language development. If you are building, testing, publishing, and deploying software on any platform, Gradle offers a flexible model that can support the entire development lifecycle from compiling and packaging code to publishing web sites. Gradle has been designed to support build automation across multiple languages and platforms including Java, Scala, Android, Kotlin, C/C++, and Groovy, and is closely integrated with development tools and continuous integration servers including Eclipse, IntelliJ, and Jenkins.

    Tags: Deployment, CI/CD

    [Website](https://gradle.org/) - [Documentation](https://docs.gradle.org/current/userguide/userguide.html) - [Releases](https://gradle.org/releases/) - [Github Home](https://github.com/gradle/gradle)

    ### Microsoft Azure CLI 2.50.0

    The Azure CLI is a cross-platform command-line tool for connecting to Azure and executing administrative commands on Azure resources.

    Tags: Azure, System Administration, Management, Command-Line

    [Installation](https://learn.microsoft.com/en-ca/cli/azure/install-azure-cli) - [Reference](https://learn.microsoft.com/en-ca/cli/azure/reference-index?view=azure-cli-latest) - [Github Home](https://github.com/Azure/azure-cli) - [Github Release](https://github.com/Azure/azure-cli/releases/tag/azure-cli-2.50.0)

    ### OpenTelemetry Collector v0.81.0

    The OpenTelemetry Collector offers a vendor-agnostic implementation on how to receive, process and export telemetry data. In addition, it removes the need to run, operate and maintain multiple agents/collectors in order to support open-source telemetry data formats (e.g. Jaeger, Prometheus, etc.) to multiple open-source or commercial back-ends.

    Tags: Monitoring, Observability, OpenTelemetry, Collector, Traces, APM, Metrics, Logs

    [Website](https://opentelemetry.io/) - [Documentation](https://opentelemetry.io/docs/collector/getting-started/) - [Github Home](https://github.com/open-telemetry/opentelemetry-collector) - [Github Release](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.81.0)

    ### Podman Desktop v1.2.0

    Podman Desktop is a graphical interface that enables application developers to seamlessly work with containers and Kubernetes.

    Tags: Docker, Containers, Desktop

    [Website](https://podman-desktop.io/) - [Documentation](https://podman-desktop.io/docs/intro) - [Downloads](https://podman-desktop.io/downloads) - [Github Home](https://github.com/containers/podman-desktop) - [Github Release](https://github.com/containers/podman-desktop/releases/tag/v1.2.0)

    ### SigNoz v0.23.0

    Monitor your applications and troubleshoot problems in your deployed applications, an open-source alternative to DataDog, New Relic, etc.

    Tags: Monitoring, Observability, OpenTelemetry, Traces, APM, Metrics, Logs

    [Website](https://signoz.io/) - [Documentation](https://signoz.io/docs/) - [Github Home](https://github.com/SigNoz/signoz) - [Github Release](https://github.com/SigNoz/signoz/releases/tag/v0.23.0)

    ### Terraform Provider - AWS v5.8.0

    The AWS Provider allows Terraform to manage [AWS](https://aws.amazon.com/) resources. [Terraform](https://www.terraform.io/) is a tool for building, changing, and versioning infrastructure safely and efficiently.

    Tags: AWS, Orchestration, Programming, Terraform

    [Documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) - [Github Home](https://github.com/hashicorp/terraform-provider-aws) - [Github Release](https://github.com/hashicorp/terraform-provider-aws/releases/tag/v5.8.0)

    ### Terraform Provider - AzureRM v3.65.0

    The AzureRM Terraform Provider allows managing resources within [Azure Resource Manager](https://azure.microsoft.com/en-us/). [Terraform](https://www.terraform.io/) is a tool for building, changing, and versioning infrastructure safely and efficiently.

    Tags: Azure, Orchestration, Programming, Terraform

    [Documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) - [Github Home](https://github.com/hashicorp/terraform-provider-azurerm) - [Github Release](https://github.com/hashicorp/terraform-provider-azurerm/releases/tag/v3.65.0)


### Prometheus 2.45.0 / 2023-06-23

Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

Tags: Monitoring, Observability, Dashboards, Metrics, Alerting

[Website](https://prometheus.io/) - [Documentation](https://prometheus.io/docs/introduction/overview/) - [Downloads](https://prometheus.io/download/) - [Github Home](https://github.com/prometheus/prometheus) - [Github Release](https://github.com/prometheus/prometheus/releases/tag/v2.45.0)

### SigNoz v0.21.0

Monitor your applications and troubleshoot problems in your deployed applications, an open-source alternative to DataDog, New Relic, etc.

Tags: Monitoring, Observability, OpenTelemetry, Traces, APM, Metrics, Logs

[Website](https://signoz.io/) - [Documentation](https://signoz.io/docs/) - [Github Home](https://github.com/SigNoz/signoz) - [Github Release](https://github.com/SigNoz/signoz/releases/tag/v0.21.0)

### Terraform Provider - AWS v5.5.0

The AWS Provider allows Terraform to manage [AWS](https://aws.amazon.com/) resources. [Terraform](https://www.terraform.io/) is a tool for building, changing, and versioning infrastructure safely and efficiently.

Tags: AWS, Orchestration, Programming, Terraform

[Documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) - [Github Home](https://github.com/hashicorp/terraform-provider-aws) - [Github Release](https://github.com/hashicorp/terraform-provider-aws/releases/tag/v5.5.0)

    Inside a Shimano Alfine internally-geared bicycle hub
    lxpw
    1y ago 100%

Aligning the Alfines involves putting them into 4th gear and turning the fine adjustment on the shifter until the yellow lines on the hub line up. It can easily be done every time you plan to use the bike; it is just easy to forget to do.

    Alfine alignment

As for dealing with wheel re-installation and shifter cable reattachment, I customized a spanner to fit perfectly between the hub and the dropouts, either to hold the hub in place while tightening the axle nuts or to load tension when reconnecting the shifter cable. I can't remember which. It might have been both.

    Alfine spanner

After having failure issues using an Alfine 11 on a 29+ MTB (a high-torque setup), I have used nothing but cassette-based drivetrains on everything but my folding bike. Also, my only local hub rebuilder has stopped providing that service and Universal Cycles stopped carrying replacement internals, so I will probably not use an Alfine IGH on any new bikes.

  • Inside a Shimano Alfine internally-geared bicycle hub
    lxpw
    1y ago 100%

I have an Alfine 11 and an Alfine 8 still mounted in wheelsets for my folding bike, and I think I have another Alfine 11 in parts. The Alfine 8 is bomb proof; the 11 is much less so. If you religiously keep them in alignment they will last. If you ride on them while misaligned, you will wear down the pawls and start losing gears. After that you need to do a rebuild, and very few places do that.

Also, avoid using them in 29" wheels. It is too much torque for them, and the pawls will wear down within a year or two.

They are stupid fast for upshifting, as you can do it under some load and the shifting is instant. The other nice perk is that you can downshift while stopped. No need to lift the back wheel and let it spin.

Winter commuting is where they shine. I have to replace my whole drivetrain every winter on my cassette bikes, while the IGHs are completely unaffected by the abuse. The grease/oil lubricant used inside does start to thicken up and add some resistance when starting out, but that only happens below about -7 °C.

If you need the gear range of the Alfine 11, I would suggest considering a Rohloff Speedhub instead, but they are three times the price.

  • TekSavvy puts itself up for sale amid industry turmoil
    lxpw
    1y ago 100%

    I had Rogshaw come to the door yesterday pushing their merger high-speed promo. Hopefully Teksavvy doesn't get gobbled up by the three-headed Robelus. Sad day indeed.

  • Bike Repair Stand Recommendations
    lxpw
    1y ago 100%

    The YC-100BH Repair Stand that @Showroom7561@lemmy.ca has is probably your best/cheapest bet. I use a much more pricey Feedback Sports stand, but I have to deal with a steel frame fat bike with 5" tires.

### Fluent Bit 2.1.5

Fluent Bit is a fast Log Processor and Forwarder for Linux, Windows, Embedded Linux, MacOS and BSD family operating systems.

Tags: Monitoring, Observability, Logs

[Website](https://fluentbit.io/) - [Documentation](https://docs.fluentbit.io/manual/) - [Github Home](https://github.com/fluent/fluent-bit) - [Github Release](https://github.com/fluent/fluent-bit/releases/tag/v2.1.5)

### Hashicorp Vault v1.14.0

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

Tags: Security, Secret Store

[Website](https://www.vaultproject.io) - [Documentation](https://www.vaultproject.io/docs/) - [Downloads](https://developer.hashicorp.com/vault/downloads?product_intent=vault) - [Github Home](https://github.com/hashicorp/vault) - [Github Release](https://github.com/hashicorp/vault/releases/tag/v1.14.0)

### OpenTelemetry Collector v0.80.0

The OpenTelemetry Collector offers a vendor-agnostic implementation on how to receive, process and export telemetry data. In addition, it removes the need to run, operate and maintain multiple agents/collectors in order to support open-source telemetry data formats (e.g. Jaeger, Prometheus, etc.) to multiple open-source or commercial back-ends.

Tags: Monitoring, Observability, Collector, Traces, APM, Metrics, Logs

[Website](https://opentelemetry.io/) - [Documentation](https://opentelemetry.io/docs/collector/getting-started/) - [Github Home](https://github.com/open-telemetry/opentelemetry-collector) - [Github Release](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.80.0)


### Ansible 2.15.1

Ansible is a radically simple IT automation system. It handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. Ansible makes complex changes like zero-downtime rolling updates with load balancers easy. More information on the Ansible [website](https://ansible.com/).

Releases: https://github.com/ansible/ansible/releases

### Hashicorp Vault 1.13.4

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

Downloads: https://developer.hashicorp.com/vault/downloads?product_intent=vault

### Node.js 20.3.1

Node.js is an open-source, cross-platform JavaScript runtime environment.

Downloads: https://nodejs.org/en/download

### Terraform Provider - Google (GCP) 4.70.0

The Terraform Google provider is a plugin that allows Terraform to manage resources on [Google Cloud Platform](https://cloud.google.com/). [Terraform](https://www.terraform.io/) is a tool for building, changing, and versioning infrastructure safely and efficiently.

Documentation: https://registry.terraform.io/providers/hashicorp/google/latest/docs

Releases: https://github.com/hashicorp/terraform-provider-google/releases


Storing usernames, passwords, IDs, and secrets in text-based config files is still common practice practically everywhere. A decade ago this was a requirement, but there are now better ways to avoid this practice and switch to something more secure.

#### First step

Grant identity to your VM instances, containers, and functions.

**AWS:** Attach IAM instance profiles or IAM roles to your EC2 instances, ECS tasks, EKS pods, or Lambda functions.

**Azure:** Add a system-assigned Managed Identity to everything. There isn't a reason why this isn't the default practice. Do avoid user-assigned Managed Identities, though.

**GCP:** Attach user-managed service accounts to anything that supports them.

Once that is done, you can grant permissions to anything running on those VM instances or containers using IAM roles.

#### Second step

Work on removing any stored credentials you were using to access services within the cloud environment. Now that the resources have been granted identity directly, those credentials aren't needed anymore.

**AWS:** There isn't anything else to do. The pre-installed agent maintains a rolling set of temporary access keys, stored in environment variables.

**Azure:** You will now need to switch to logging in using the identity instead of passing in credentials.

> *Examples:*
> - CLI: `az login --identity`
> - Code: Use `ManagedIdentityCredential` instead of `DefaultAzureCredential`.

**GCP:** You don't need to do anything. Application Default Credentials will use the assigned service account automatically.

#### Third step

Move any remaining secrets to SSM Parameter Store, Secret Manager, or a Key Vault. Then grant your identity access to the secrets and add some code to your app to pull them in at startup. Those secrets now exist in memory instead of being written to disk.

---

That covers granting access within your cloud environment. Now we are going to expand this practice outside of your cloud's walled garden.
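As a sketch of that third step, assuming AWS and boto3, pulling a secret into memory at startup could look like this. The parameter name is hypothetical, and the optional `ssm` argument is only an injection hook so the sketch can be exercised without live credentials:

```python
def load_secret(name: str, ssm=None) -> str:
    """Fetch one secret from SSM Parameter Store at process startup.

    With an instance profile or task role attached, boto3 resolves
    temporary credentials automatically; nothing is stored on disk.
    """
    if ssm is None:
        import boto3  # deferred so the sketch is testable without boto3
        ssm = boto3.client("ssm")
    resp = ssm.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]


# At startup, the secret then lives only in this process's memory.
# "/myapp/db-password" is a hypothetical parameter name:
# db_password = load_secret("/myapp/db-password")
```

`get_parameter` with `WithDecryption=True` is the SSM call for SecureString parameters; Secrets Manager's `get_secret_value` follows the same pattern.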
### Workload Identity Federation

Workload Identity Federation is a newer term for the ability of an internet-attached workload to authenticate to, and access resources provided by, another internet-attached service or workload using the identity already assigned to it. In other words, it grants something outside of your environment access to stuff in your environment without using a separate set of stored credentials.

The source workload can be a resource in a public cloud, an internet-based software service, or even a custom application running on-premises. A common example is allowing your deployment pipeline (GitHub, GitLab, Azure DevOps, or Bitbucket) to deploy new resources without having to store a set of credentials. You effectively allow the third-party service to use a role (AWS), app registration (Azure), or service account (GCP) within your cloud environment if certain criteria are met. It is even possible to use this form of authentication and authorization between any of the three main public clouds (AWS, Azure, and GCP). I have examples of how this is done if anyone is interested.

### OpenID Connect

This method of workload authentication and authorization makes use of OpenID Connect. OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol that has become the default for authorizing end users. This layer allows a target platform to verify the identity of a source user or workload based on the authentication performed by a web service, the Authorization Server. It is accessed by way of a predetermined URL that is known to the target identity platform. During the process, basic information is passed on to the target platform to aid in verifying the workload's identity.

With traditional secret-based credentials, if they get compromised, they can be used from anywhere in the world.
To try to control their use, network restrictions such as IP address whitelists are put in place. Using IP whitelists with public cloud providers is very problematic, bordering on impossible, as the full set of potential IPs is either not published or in a constant state of change. Workload Identity Federation reduces the need for IP address whitelists as an extra layer of network security, because authentication is locked down to the source identity and can't be used outside the constraints enforced by the source identity provider. If the identity is directly assigned to a workload, only that workload can use it.

NOTE: Instead of trusting a set of shared credentials, you are now trusting the application or workload. Make sure the workload you are trusting is trustworthy.

### Warnings

**Azure:** If you use a shared identity (a user-assigned Managed Identity), anything with contributor access to that managed identity can generate access tokens and impersonate it. If you must use a shared identity, store it in a separate resource group that only security admins can access.

**GCP:** Do not generate user-managed keys for your service accounts, as they allow others to impersonate the service account. They also completely defeat the original purpose of doing away with credentials.
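To make the OpenID Connect mechanics concrete: the token a source workload presents is a signed JWT, and the claims the target platform evaluates against its trust policy (issuer, subject, audience) sit in its middle segment. A minimal sketch of reading those claims with only the standard library follows; the example claim values are illustrative, and signature verification against the issuer's published keys, which a real platform must perform, is deliberately omitted:

```python
import base64
import json


def decode_jwt_claims(token: str) -> dict:
    """Decode the claims segment of a JWT without verifying it.

    For illustration only: a real target platform MUST verify the
    signature against the issuer's published signing keys before
    trusting any of these claims.
    """
    payload = token.split(".")[1]
    # JWT segments are URL-safe base64 without padding; restore it.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))


# A trust policy then typically matches claims such as:
#   iss - which identity provider issued the token
#   sub - which workload (e.g. which repo and branch) it identifies
#   aud - which target platform the token was minted for
```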

    Board Games 1y ago
    What are the new gateway games?
    lxpw
    1y ago 100%

    Azul and its variations are the best match for a recent hit that is approachable.

    Wingspan would be a recent game that has blown up, but there is a little learning curve.

    Though not new, Dixit would be a game that is very approachable and that has been very popular. It is my go-to when gift giving. "The Game" (I know, terrible name) is another older one that people want to get after I introduce them to it.

    Board Games 1y ago
    Favourite 2 player game?
    lxpw
    1y ago 100%

Based on your list, I would suggest Calico or Azul: Summer Pavilion. I prefer the Summer Pavilion edition over the original edition of Azul.

    You may also like Codenames: Duet, but it is not a talkative game.

    If you don't mind abstract I would suggest YINSH or any of the other project GIPF games.

    Local specialty board game shops should have those. If not, the best place in Canada to find board games is https://www.boardgamebliss.com/. They just run out of stock, a lot.

    Terraform lxpw 1y ago 100%
    Terraform provider for AWS 5.4.0 released
    github.com

cross-posted from: https://lemmy.ca/post/692807

### FEATURES:

- New Data Source: aws_organizations_policies (#31545)
- New Data Source: aws_organizations_policies_for_target (#31682)
- New Resource: aws_chimesdkvoice_sip_media_application (#31937)
- New Resource: aws_opensearchserverless_collection (#31091)
- New Resource: aws_opensearchserverless_security_config (#28776)
- New Resource: aws_opensearchserverless_vpc_endpoint (#28651)

### ENHANCEMENTS:

- resource/aws_elb: Add configurable Create and Update timeouts (#31976)
- resource/aws_glue_data_quality_ruleset: Add catalog_id argument to target_table block (#31926)

### BUG FIXES:

- provider: Fix index out of range [0] with length 0 panic (#32004)
- resource/aws_elb: Recreate the resource if subnets is updated to an empty list (#31976)
- resource/aws_lambda_provisioned_concurrency_config: The function_name argument now properly handles ARN values (#31933)
- resource/aws_quicksight_data_set: Allow physical table map to be optional (#31863)
- resource/aws_ssm_default_patch_baseline: Fix *conns.AWSClient is not ssm.ssmClient: missing method SSMClient panic (#31928)
