From Chaos to Clarity: The Essential Guide to Personal Knowledge Management

Are you tired of constantly struggling to remember important information? Do you feel overwhelmed by the endless stream of inputs coming at you every day? You’re not alone. In today’s digital world, we’re bombarded with an average of 74 GB of data per day - that’s the equivalent of watching 16 movies! (source)

But what if there was a way to take control of all this information and use it to your advantage? Enter Personal Knowledge Management (PKM).

📔 What is Personal Knowledge Management?

(Source: RoamBrain.com)

PKM is the active and intentional process of managing the information that comes your way on a daily basis. It involves finding, organizing, and storing information in a way that makes it easy for you to access and use when you need it. It also involves using various tools and techniques to help you understand, analyze, and apply the information you’ve collected.

🤔 Why should you care about PKM?

(Source: jasongilbertson.com)

In this blog post, we’ll explore the benefits of PKM and how it can help you create clarity, build a “second brain,” and improve your efficiency and effectiveness. PKM is an ongoing process that requires discipline and effort to maintain, but it can provide significant benefits, including:

  • Connecting ideas for new insights. Atomic bi-directional notes allow you to connect ideas and make new insights possible. Insights don’t happen in a vacuum - they’re the result of making new connections.

  • Creating future value. Even if you don’t use your notes for a current project, you’re preparing knowledge for future projects. Just as data gives organizations an advantage, your personal repository of knowledge will give you a leg up over time.

  • Tackling complex problems. It can be tough to keep all the balls in the air when you’re tackling complex problems. Implementing a PKM system allows you to focus on a small part of the problem and then step back and look at it with a bird’s eye view.

🔢 “Levels” of PKM

Are you already a pro at taking notes? That’s great, but there are different levels to this practice. You can evaluate where you are in your PKM journey using this guide from Tiago Forte’s The 4 Levels of Personal Knowledge Management:

Level 1: Storing Information

  • You use apps to receive, edit, and send information.
  • You organize your files using default folders and subfolders.
  • You take notes on your smartphone or mobile device for practical tasks or occasionally from sources such as books or podcasts, but don’t do much with them.

Level 2: Managing Knowledge

  • You capture ideas and creative inspiration from both your own thoughts and external sources.
  • You use digital note-taking as a significant part of your daily life, capturing information from a wide variety of sources.
  • Your notes begin to work as a thought partner, reminding you of things you’ve forgotten and surfacing connections between ideas.
  • You refine your knowledge management tools and perform small experiments to discover better ways of doing things.

Level 3: Enabling Action

  • You shift your focus from ideas to action, using your insights to tangibly improve your learning, health, career, business, and society.
  • You become more selective about the information you consume, preferring only high-quality, substantive sources that relate to your goals.
  • You use your notes to take on more complex projects and become more productive, creative, and relaxed.
  • You apply creativity to the workings of your system and integrate it deeply into your thinking for leverage.
  • The benefits of your system extend to others, such as through a website, blog, social media feed, podcast, or product.

Level 4: Personal Knowledge Mastery

  • You have achieved a high level of proficiency in managing your personal knowledge.
  • You are able to use your knowledge effectively to achieve your goals and are constantly learning and adapting to new information and situations.
  • You regularly review and revise your system to ensure it is still effective and efficient.
  • Your system becomes a part of your identity and personal brand, and you are able to share your knowledge and insights with others in a way that is meaningful and impactful.

🤓 Building Your PKM System

Now you may be thinking, alright, how do I level up my game? What do I need to do? First things first, you need the right tool for the job.

(Source: NoteApps.Info)

As you probably know, there are tons of apps in the market. Which one do you choose? Is there even a difference?

The app you choose will largely determine how successful you are at organizing your knowledge. You don’t want something so complex that all you think about is the tool rather than why you’re using it; on the other hand, you don’t want something so simple that it lacks the essentials.

🧐 Understand Your Needs

To narrow down the search through the sea of apps, I recommend thinking about a set of requirements that are important to you based on your usage and thinking patterns. For example, consider the following questions:

  • How do you prefer to take notes? Do you write in bullet points or prose?
  • Do you care about referencing other notes and seeing backlinks?
  • Do you like the concept of a daily planner?
  • Do you care about consolidated notes, calendar, and task management?
  • Will you use it on the go and on multiple devices?

For me, thinking through some of those questions yielded the following list of requirements:

🔗 Requirement #1: Bi-directional - The ability to create connections between different blocks of knowledge and easily reference and backlink them.

📋 Requirement #2: Outline-based - The ability to organize thoughts and ideas in a hierarchical, bullet point format.

📅 Requirement #3: Temporal - The ability to take daily notes and keep a journal of events and activities.

Requirement #4: Action Oriented - The ability to create and track tasks, ensuring that important actions don’t fall through the cracks.

📲 Requirement #5: Multi-device syncing - Mobile support for on-the-go note taking as well as multi-device syncing.

🧪 Experiment

A good way to narrow down your options is to use a review site like NoteApps.Info to compare different tools and see which ones seem most promising.

Once you’ve identified a few tools that you’d like to try, it’s a good idea to give each one a test run for a few days to see how it fits into your daily routine. Keep in mind that it may take some trial and error before you find a tool that feels productive and comfortable to use.

For example, I tried all of the following tools for a few days before finding one that worked for me:

  • Roam Research: This is the leader in the space with a cult following. It offers powerful bi-directional linking and hierarchical outlining capabilities. No free version.
  • Obsidian: Similar to Roam but with a ton of extensibility and a plugin ecosystem. Multi-device syncing is a paid add-on.
  • RemNote: Similar to Roam but slightly bent towards students and educational use-cases.
  • logseq: Essentially an open-source version of Roam focused on privacy and data portability. Lacking robust multi-device syncing.
  • Amplenote: The most low-key of the bunch. Has the essential features, with robust multi-device syncing and mobile support. No plugins or ability to customize the UX. Generous free tier.

I ultimately decided to stick with Amplenote given it checked all my requirements and has proven to be super reliable for me.

💡 Strategies for Success

So now that you know the benefits of PKM, how do you get started and make it stick? Here are some strategies that can help you achieve success:

  • Start small and focus on one area at a time. It can be overwhelming to try to tackle everything at once. Start with one area of your life, such as taking meeting notes, remembering ideas, writing blog posts, planning a vacation, etc., and build from there.
  • Find a tool that works for you. There are many different tools available for PKM, from simple note taking apps to more advanced knowledge graph apps. Take the time to experiment with different options and see what sticks.
  • Use your notes to reflect on your learning and progress. PKM isn’t just about capturing and organizing information - it’s also about making use of it to improve your learning and development. Take time to review and reflect on your notes regularly, and use them to identify patterns, make connections, and track your progress over time.
  • Be consistent and make PKM a habit. It takes time to build a strong PKM system, so be patient and consistent in your efforts. Make it a habit to capture and organize information on a regular basis.

📑 Additional Resources & Examples

  • How to Build a System for Lifelong Learning - I highly recommend this post by Jason Gilbertson where he takes you through his knowledge management system to serve as an inspiration.
  • Building a Second Brain - If you prefer something more guided and structured look no further than Tiago Forte’s BASB course.
  • Andy’s Notes - One of the most remarkable examples out there. Andy uses the concept of atomic, evergreen notes, and combines it with a fun way to navigate the content as you open up each note like a book. Each note title = hyperlink.
  • KasperZutterman/Second-Brain - A curated list of awesome Public Zettelkastens 🗄️ / Second Brains 🧠 / Digital Gardens 🌱

📣 Share The Knowledge

I hope you found this post helpful and thought-provoking. If you have any additional tips or strategies for effective Personal Knowledge Management, please share them in the comments below. And if you found this post valuable, please consider sharing it on social media or tagging a friend or colleague who might benefit from reading it.

How To Own Your Growth As A Software Engineer

An engineer’s journey from junior to lead is full of learnings and requires frequent adaptation and agility to succeed. But, as this Tweet mentions, what demands most of your time and focus tends to differ at each engineering level.

What got you here will not get you there.

The best and most successful engineers realize this and adapt to meet the need at the moment rather than clinging to what got them so far. They are what Liz Wiseman calls Impact Players — a book and a mental model I highly recommend.

In this post, I will take you through a mental model of thinking about your growth as an engineer, techniques to advocate for yourself, and advice on organizing your thoughts and communicating well.

The Rigor/Relevance Framework®

So the first question that you may have is, how should I operate? How do I know when to focus on learning, building, or something else?

A structured tool you can leverage here is The Rigor/Relevance Framework®, developed by the International Center for Leadership in Education to examine curriculum, instruction, and assessment.

The Rigor/Relevance Framework®

As you might guess, they typically use this framework in education settings to chart a student’s growth of knowledge and how they apply it. It is an excellent lens to view an engineer’s growth because great engineers are lifelong learners who follow a similar trajectory!

You can use this framework as a guide to track where you are now and where you might go next as a software engineer. We can examine the expectations at each level and how you can grow your impact and reach the next one.

Quadrant A (Acquisition) — Junior Engineer

You are an early-stage engineer: an intern, in your first job out of school, or switching industries.

Your manager will probably expect you to gather and store bits of knowledge and information. They will also expect you to remember or understand this acquired knowledge. Your team mentor, manager, or tech lead will guide you in completing your tasks.

You can exceed expectations if you understand and leverage existing patterns & practices, as well as execute tasks with minimal guidance. For more, see Samuel Taylor’s brilliant advice on how to join a team and learn a codebase.

Quadrant B (Application) — Mid-Level Engineer

You are a mid-career engineer with a few years of experience and maybe in your second or third job as an engineer.

Your manager will expect you to use acquired knowledge to solve problems, design solutions, and complete work more or less independently.

You can exceed expectations if you can quickly ramp up and contribute to a new team/domain and apply your knowledge to new and unpredictable situations.

Quadrant C (Assimilation) — Senior Engineer

You are a veteran engineer who has worked in the industry for about a decade. You are the driver of your work streams and a mentor to others.

Your manager will expect you to extend and refine your acquired knowledge to expertly and routinely analyze and solve problems and create unique solutions.

You can exceed expectations by consistently stepping into a leadership role and helping your team take complex projects to the finish line by working closely with stakeholders, identifying solutions, defining the scope, estimating timelines, and measuring success. You know how to be what Keith Rabois calls a Barrel.

Quadrant D (Adaptation) — Staff+ Engineer

You are a technical craftsperson and have been in leadership roles in the industry. You typically lead teams and companies on strategic and mission-critical initiatives. In addition, you are a compassionate mentor who can guide individuals and teams to achieve greater success.

Your manager will expect you to think in complex ways and apply your acquired knowledge and skills. However, even when confronted with perplexing unknowns, you can use your experience to create solutions and take action that further develops your skills.

You can exceed expectations by practicing imagination, prioritizing the right things at the right time, and executing your vision by inspiring and leading others. See Will Larson’s guides for reaching and succeeding at Staff-plus roles for a deep dive.

The map is not the territory.

Organizations essentially split those four quadrants above into different levels at varying granularity, but the core concepts usually remain the same. Therefore, you can think of the above levels as a map for your career progression.

Although these leveling guides are helpful, it is essential to remember that even the best maps are imperfect. In Alfred Korzybski’s words, the map is not the territory. That’s because maps are reductions of what they represent. If a map were to describe the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be helpful to us.

That is all to say, don’t read the leveling guides too literally or get fixated on ticking the boxes. Instead, take the time to explore your interests, learn and savor the journey. Personally, I took several detours along the way, such as pursuing entrepreneurship and switching back and forth from IC to management to maximize my learnings & impact rather than worrying about titles or showing linear growth on my resume. For more, see Charity Majors’ thoughts on engineering levels.

Also, it’s essential to remember that titles are usually a byproduct of all your achievements. They are a lagging indicator of your impact. Therefore, you should never wait for titles to work on your skills and find your flow state. Psychologist Mihaly Csikszentmihalyi describes the flow state as:

a constant balancing act between anxiety, where the difficulty is too high for the person’s skill, and boredom, where the difficulty is too low

Do you have the right opportunities, mindsets, and support to achieve that flow state and create a greater impact? If not, what needs to change?

Be your own advocate

Following and meeting the expectations of a leveling guide alone is sometimes not enough. It’s also crucial that you become your own advocate and authentically tell your story. For one, make sure you have a way to remember, celebrate, share, and learn from all your accomplishments.

Reflecting on your journey will help you spot trends, key strengths, and areas of improvement. I’ve found it easier to jot down my thoughts right after passing a milestone, such as at the end of the quarter or after completing a project. I’ve also found it helpful to track my time using tools like Toggl, RescueTime, etc., to understand and visualize my time pie.

You can use those reflections as a springboard to share your achievements & insights more broadly. I know that may not always come naturally to some engineers, but putting yourself out there will help others, amplify your work and unlock new opportunities. It is also a great opportunity to practice gratitude and recognize others around you.

Related to the points above, see Julia Evans’s excellent blog post on how to get your work recognized and Jason Roberts’s blog post on increasing your luck surface area.

Communicate well

Another critical skill often overlooked is an engineer’s ability to write and communicate. Make sure you invest in that skill as early as possible in your career. It will pay long-term dividends unlike any other. In fact, your ability to advocate for yourself hinges on your ability to communicate well.

Clear and concise communication will also help you think through complex concepts more clearly. It is also a huge plus when working at a company that values written communication, such as Amazon, with its culture of six-page memos, which Jeff Bezos discussed in one of his shareholder letters.

To build this skill gradually, I’ve used Personal Knowledge Management (PKM) tools, such as Notion, Roam, RemNote, etc., to create a daily writing practice and a way to organize thoughts. You can then leverage those raw thoughts and outlines to write polished documents. Set a goal for yourself to write a certain number of external blog posts or internal memos. Start small, maybe 2-3 per year, and slowly increase the cadence.

For more on writing well, check out Heinrich Hartmann’s Writing for Engineers guide and Michael A. Covington’s presentation on How to Write More Clearly, Think More Clearly, and Learn Complex Material More Easily.

Final thoughts

To recap, the core tenets of owning your growth as an engineer covered here were:

  • What got you here will not get you there; do the job that’s needed of you
  • Understand what it’ll take to exceed expectations and have a plan to outperform
  • Advocate for yourself through storytelling and sharing your work
  • Learn to organize your thoughts and communicate articulately

I would love to hear if that resonates and if there is any other advice you’ve found to be pivotal in your growth journey.


This post was originally published on the BetterUp Product Blog.

The Art Of Crafting Effective Pull Requests

Pull Request

Pull Requests (PRs) are a great way of submitting contributions to a project, especially when there are multiple developers working on it at the same time. If done well, they are a medium through which we can receive feedback and increase visibility for the changes we are shipping.

There are many elements to what makes for a good PR. They end up determining the quality of the end deliverable as well as telling a story of how a certain feature or change came about.

In this post, I’ll cover some of the basics of what to optimize for at each stage of the PR’s journey to production. My hope is that you’ll be able to use this to maximize feedback quality and minimize time to merge. So let’s dive in.

Scoping

The first step in the journey to crafting a PR is to create a separate branch for your changes. At this stage, scoping your changes is key. First, think of how big of a change-set you want to introduce in a single PR. Try to address one issue or build one feature within the PR. The larger it is, the more complex it is to review and the more likely it will be delayed. Remember that reviewing PRs is taking time from someone else’s day!

Once you’ve settled on the scope, ensure that each change is broken up into small, logical commits. Having small commits is valuable for several reasons. First, it allows the reviewer to see the progression of the changes and understand the PR in its current state. It is also very valuable if you are using git bisect to debug an issue or trying to roll back a change in the future.
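As a rough illustration of that flow (the branch name, tag, and commit messages below are made up):

git checkout -b fix-avatar-upload-timeout  # one focused branch per PR
git add -p                                 # stage one logical change at a time
git commit -m "Extract upload timeout into a config value"
git add -p
git commit -m "Retry avatar uploads once on timeout"

# Small commits also pay off later, e.g. when hunting a regression:
git bisect start
git bisect bad HEAD          # the current commit is known to be broken
git bisect good v1.42.0      # the last known good release tag
# git checks out midpoints; mark each good or bad until the offending commit is found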

Drafting

The next step in the journey is to open up a draft PR. This is not always necessary, but it’s useful if you want to see the full CI run with all the tests, assuming you are not running the entire test suite locally. This will give you early signals on whether your changes are interacting with the rest of the codebase in unexpected ways.

Take a moment to review your changes and see if everything makes sense from the perspective of the reviewer. If you are satisfied with the state of the PR, you could tag reviewers for early feedback if necessary.

If you do decide to tag reviewers, ask for specific feedback, calling out the fact that the PR is still work-in-progress. Guide your reviewers with what you want feedback on. This will ensure they don’t spend time and energy giving you feedback on things that are likely to change, or things that you are already aware of.

Publishing

If your draft PR is in good shape, you are now ready to tag reviewers. When it comes to picking reviewers, pick no more than 2 - 3 primary reviewers. If you tag a large number of reviewers, chances are you might see a bystander effect.

Before tagging reviewers, ensure that:

  • Tests are all green. You don’t want the reviewers to spend their valuable time pointing out the basics.
  • You’ve clearly articulated the purpose of the PR as well as the broader context for the change. Link back to relevant Slack threads or tickets. For the sake of posterity, this is important even if you think the reviewers already have the full context.
  • You’ve done a self-review. I would recommend running through Maslow’s pyramid of code review to see if your work meets those criteria. You should be the first person to critically review your work before anyone else does.
  • If there are specific follow-ups that you have in mind, describe those as well so the reviewers know what to expect down the line.
  • You have provided related artifacts to the PR in the form of screenshots, videos or logs. This will help your reviewers better understand the impact of your changes.

Feedback

When you receive feedback from the reviewers, keep in mind:

  • Tone can easily get lost in async communication. So assume the best intent and don’t take the feedback personally. Offer clarification and context for your decisions.
  • If something isn’t clear, ask clarifying questions, don’t try to read the tea leaves. If things are still not clear in async conversation, request some time to chat on a call. Once you’ve had a chance to discuss, circle back and summarize the discussed points in the PR to preserve context.
  • Whether you decide to take the feedback or not, be sure to acknowledge the feedback you’ve received. This demonstrates that you value the reviewers’ time and effort.
  • Explicitly re-request review once you are done addressing the feedback. Don’t assume the reviewer will know when you are done. Bonus points if you follow up with a comment pointing out the specific commit that addressed their feedback.

Releasing

Once you have gone through the feedback cycles and gotten the approval of reviewers remember to come back and merge your PR. The longer you wait, the greater the chances of encountering merge conflicts or regressions. So be sure to merge as quickly as possible.

If you are releasing a PR that will perform a large data operation or change something in the critical path, ensure that you are available to follow that PR through the release process and mitigate any errors or hiccups along the way.

If all goes well and your changes are in production, remember to follow up with the stakeholders and update them on the progress.

This post was originally published on the BetterUp Product Blog.

Techniques for Effective Software Development Effort Estimation

Estimation

How long do you think it’ll take?

As builders and creative people, we are all too familiar with that question. Getting estimates right is incredibly difficult and it’s a skill that we learn slowly over time as we gain more experience building and shipping projects.

So why is this simple exercise so difficult? Oftentimes it’s because we forget to ask the right questions and make assumptions that may not be correct. Let’s examine the questions we should be asking and break them down into phases.

Scoping — What is being requested and when is it needed?

Don’t assume that what you think of as “done” is the same as what the party asking for an estimate would call “done.” It is important to explicitly call out the timeline and specific deliverables before doing the estimation exercise.

So part of that is first understanding what you are being asked to estimate. Make sure what you have in mind is an acceptable outcome for the stakeholder. If you don’t already have it, make a list of user personas and stories to align on the requirements with the stakeholder and decide on what will be in scope.

Secondly, understand the user group that should be targeted as part of the delivery timeline. For example, will the product be shipped in phases such as internal, friends & family, early access, general availability, etc.? If so, which phase does our estimate aim for? Be explicit about which release phase you are estimating for.

Technical Exploration — How will it be built?

To provide a good estimate there has to be some level of understanding of the existing system and how to go about making changes in it.

You can never know exactly all the steps you may need to take but there has to be a certain degree of confidence. Anything below 70% confidence would warrant a technical exploration or a spike to get a better understanding of the required effort.

If you are going to touch a particular aspect of the system, take the opportunity to leave it in a better state than you found it in. This is a good time to identify if there are any long-standing hotspots or technical debt that could be addressed as part of this task. Even small incremental improvements will help keep the system maintainable over the long run.

Capacity Planning — What is the level of effort?

The next step in formulating an estimate is to get a handle on the capacity. For example, based on the technical exploration, you may think something might require one week of effort. This is the most common step where the estimation effort derails.

We are not done. We still have to further refine that and ask, “Is it one week of an average engineer’s time? Or is it specifically your time?”

If you are estimating for yourself, does that account for all the meetings you have to attend? Are there any holidays coming up? Do you have any other competing priorities or commitments? Estimate the time those things will take away from your focus and add that to the estimate.

Also, does this account for time to deal with any potential hiccups or areas of high ambiguity that may still have to be fleshed out? Figure out your confidence level after the technical exploration, then account for some additional time based on the percentage of ambiguity that remains. It might be helpful to go through a Risks & Mitigations exercise here, where you can list out all the areas of risk and potential actions to mitigate them.
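As a hypothetical back-of-the-envelope example (all of these numbers are made up):

  Raw engineering effort from the spike:            5 days
  Meetings and other commitments (~20% of a week):  +1 day
  Ambiguity buffer (70% confidence → ~30% extra):   +1.5 days
  Estimate to communicate:                          ~7.5 days, or roughly a week and a half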

Deployment Process — What will it take to be deployed?

We are in the final stretch, but the exercise is not over yet! Now that we are getting a better handle on the actual engineering time, let’s start to think about the process of shipping the work.

What are the review phases that you will encounter? Will the changes have to go through peer reviews? If so, what kind of cycle time can we expect from the reviewers? There may be a review/feedback cycle for each change that is shipped. Approximately how long will that take?

Will there be any other reviews outside of peer reviews? Will this have to go through a design review to ensure the final product matches the designs? Will it have to go through any compliance audits such as Privacy, Security, Legal, etc? Try to gather the rough turnaround time for those.

Final Thoughts

Providing time-based estimates is always hard. It involves a slew of factors and varies from person to person. That’s one of the reasons many teams have transitioned to the agile practice of assigning “story points” to tasks.

Instead of time spent, the team assigns each task a relative complexity on a point scale. The scale can be anything from the Fibonacci sequence to t-shirt sizes. Over time, the team builds a better understanding of how its story points map to the difficulty of a task, which in turn can be used to inform timelines.

No matter which framework you decide to use, like developing any skill, estimation requires continuous practice, refinement, and learning. You’ll get better and better if you treat it as a skill that can be developed over time.

This post was originally published on the BetterUp Product Blog.

Merging SimpleCov results with parallel Rails specs on Semaphore CI

If you are running Rails specs on parallel machines with KnapsackPro, one challenge you will run into is combining the code coverage results generated by SimpleCov.

This post will show you how to generate a report of the total combined code coverage after all the tests have executed. Here’s what the pipeline diagram looks like at a high level with unrelated sections blurred out:

Pipeline Diagram

Configure

SimpleCov

First, let’s look at the config file used for SimpleCov below. Note that there is no minimum_coverage configuration for failing the build; each node most likely will not meet the minimum coverage threshold on its own, so it could lead to the build failing erroneously.

Also note the before_queue hook for KnapsackPro. This is the important piece: it sets a command name based on the CI node index so that the results are recorded against it.

.simplecov:

  SimpleCov.start do
    add_filter %r{^/config/}
    add_filter %r{^/db/}
    add_filter %r{^/spec/}

    add_group 'Admin', 'app/admin'
    add_group 'Controllers', 'app/controllers'
    add_group 'Helpers', 'app/helpers'
    add_group 'Jobs', 'app/jobs'
    add_group 'Libraries', 'lib/'
    add_group 'Mailers', 'app/mailers'
    add_group 'Models', 'app/models'
    add_group 'Policies', 'app/policies'
    add_group 'Serializers', 'app/serializers'
  end
  Rails.application.eager_load!

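  # Record each node’s results under a unique command name so they can be combined later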
  KnapsackPro::Hooks::Queue.before_queue do
    SimpleCov.command_name("rspec_ci_node_#{KnapsackPro::Config::Env.ci_node_index}")
  end

So now, when SimpleCov creates a .resultset.json, it will have a specific key depending on which CI node it was run on, like the examples below. This will be useful down the line when it comes to combining the results.

{
  "rspec_ci_node_0": {
    "coverage": { ... }
  }
}

{
  "rspec_ci_node_1": {
    "coverage": { ... }
  }
}
...

Semaphore CI

Below are the relevant portions of the Semaphore CI configuration. It runs the Rails tests and then uploads the coverage results as a Semaphore workflow artifact. After all the parallel tests have completed, it runs a job to collate the coverage results from all the machines.

.semaphore/semaphore.yml:

- name: "Rails Tests"
    task:
      jobs:
        - name: Rails
          parallelism: 10
          commands:
            - ./.semaphore/helpers/rails_tests.sh
      epilogue:
        always:
          commands:
            - ./.semaphore/helpers/upload_test_artifacts.sh
      secrets:
        - name: docker-hub
        - name: knapsack-pro-rails

- name: "Code Coverage"
  dependencies: ["Rails Tests"]
  task:
    env_vars:
      - name: SEMAPHORE_RAILS_JOB_COUNT
        value: "10"
    jobs:
      - name: Collate Results
        commands:
          - ./.semaphore/helpers/calc_code_coverage.sh
    secrets:
      - name: docker-hub

Below is the bash file which executes the Rails tests on each parallel machine. It sets up the Rails environment and then runs KnapsackPro in queue mode.

.semaphore/helpers/rails_tests.sh:

#!/bin/bash
set -euo pipefail

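# Set up the Rails environment and run the spec suite through KnapsackPro queue mode inside the CI container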
docker-compose -f docker-compose.semaphore.yml --no-ansi run \
  -e KNAPSACK_PRO_RSPEC_SPLIT_BY_TEST_EXAMPLES=true \
  -e KNAPSACK_PRO_TEST_FILE_PATTERN="spec/**/*_spec.rb" \
  ci bash -c "bin/rake ci:setup db:create db:structure:load knapsack_pro:queue:rspec['--no-color --format progress --format RspecJunitFormatter --out tmp/rspec-junit/rspec.xml']"

This is the bash file which is responsible for uploading the SimpleCov results from each machine. It compresses the coverage directory and uploads it to Semaphore.

.semaphore/helpers/upload_test_artifacts.sh:

if [ -d "tmp/rspec-junit" ]
then
  echo "Pushing rspec junit results"
  artifact push job tmp/rspec-junit --destination semaphore/test-results/
fi

if [ -d "coverage" ]
then
  echo "Pushing simplecov results"
  tar czf coverage_${SEMAPHORE_JOB_INDEX}.tgz -C coverage .
  artifact push workflow coverage_${SEMAPHORE_JOB_INDEX}.tgz
fi

Lastly, this is the bash file for collating all the results from Semaphore. It downloads the coverage artifacts from each parallel machine, runs a rake task that collates them, and then uploads the result as a combined total_coverage.tgz file, as shown below:

.semaphore/helpers/calc_code_coverage.sh:

#!/bin/bash
set -euo pipefail

for i in $(eval echo "{1..$SEMAPHORE_RAILS_JOB_COUNT}"); do
  artifact pull workflow coverage_$i.tgz;
  mkdir coverage_$i
  tar -xzf coverage_$i.tgz -C coverage_$i
done

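# Collate the per-node result sets into a single report, then publish the combined archive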
docker-compose -f docker-compose.semaphore.yml --no-ansi run ci bash -c "bin/rake coverage:report"
tar czf total_coverage.tgz -C coverage .
artifact push workflow total_coverage.tgz

The coverage:report rake task simply calls SimpleCov.collate, which goes through the coverage results in each folder and combines them into a single .resultset.json, shown below.

lib/task/coverage_report.rake:

namespace :coverage do
  desc 'Collates all result sets generated by the different test runners'
  task report: :environment do
    require 'simplecov'

    SimpleCov.collate Dir['coverage_*/.resultset.json']
  end
end

.resultset.json:

{
  "rspec_ci_node_0, rspec_ci_node_1, rspec_ci_node_2, rspec_ci_node_3, rspec_ci_node_4, rspec_ci_node_5, rspec_ci_node_6, rspec_ci_node_7, rspec_ci_node_8, rspec_ci_node_9": {
    "coverage": { ... }
  }
}

Summary

Finally, here’s what your Semaphore workflow artifacts will look like. There will be a compressed coverage file generated on each machine and the total coverage file we created at the very end:

Workflow Artifact

This approach can also be easily ported over to other CI providers by simply changing the artifact push and artifact pull commands to S3 uploads or another CI-specific artifact command.
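For example, here is a minimal sketch of what that swap might look like with S3 (the bucket name and environment variables are placeholders, and it assumes the AWS CLI is configured on the CI machines):

# Per-node upload, in place of `artifact push workflow ...`
# (BUCKET, BUILD_ID, and NODE_INDEX stand in for values your CI provides):
aws s3 cp "coverage_${NODE_INDEX}.tgz" "s3://${BUCKET}/${BUILD_ID}/coverage_${NODE_INDEX}.tgz"

# Collation job, in place of the `artifact pull workflow ...` loop:
aws s3 cp "s3://${BUCKET}/${BUILD_ID}/" . --recursive --exclude "*" --include "coverage_*.tgz"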

I hope this article was useful to you. Let me know if you have any questions or feedback.