After YouTube removes your channel - will the content still be there after a successful appeal?

This was the question I was worrying about after I was hit by the ML-assisted deletion of one of my channels.

Good news: yes, the content is still out there, even after two weeks.

Why I was anxious about the removed content

I got an e-mail from YouTube titled “We have removed your channel from YouTube”, and I could no longer access YouTube Studio or that channel. After a bit of searching around, I found out I was not the only one: this was a wrongful, automated takedown that many others had suffered too.
The letter stated the following.

We have permanently removed your channel from YouTube. Going forward, you won’t be able to access, possess, or create any other YouTube channels.

The wording in the letter was a bit confusing for me at that moment. I think I got so carried away by the word “permanently” that I dismissed everything else. They don’t say it’s all gone, but I was in a state of mild creator’s shock because of the word chosen by the YouTube copy team.

Appeal results

Thankfully I learned it was not the case.

I appealed and within a day I got my channel back.
And to my surprise, all the content was there: channel history and subscribers too. Everything was back to normal, as if the channel had never been deleted.

Your experience?

You are here probably because you wondered the same thing. Do you agree that Google’s letter is a bit too harsh or is it just me?


How to use locally installed gatsby-cli instead of global

How to use gatsby-cli from local node_modules.

Usually, the Gatsby CLI is installed globally (note the -g parameter).

npm install -g gatsby-cli

But how do you use a different version when one project needs it, for example version “5.7.0”?

If it’s not yet installed, install it without the -g parameter so it does not interfere with the globally installed version. It will be installed locally in node_modules.

npm install gatsby-cli@5.7.0

This installs the Gatsby CLI entry point at ./node_modules/gatsby-cli/cli.js. To use it, we just have to run it with node.

To test it out we can check the version number of local and global gatsby-cli.

Local:

node ./node_modules/gatsby-cli/cli.js --version

Global:

gatsby --version

They should produce different results if the two versions differ.
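As an alternative to calling cli.js directly, npm creates a launcher for locally installed CLIs in node_modules/.bin, so — assuming your project has a package.json — you can add a script entry and let npm resolve the local binary (the script name here is just an example):

```json
{
  "scripts": {
    "gatsby": "gatsby"
  }
}
```

Now running “npm run gatsby -- --version” uses the local gatsby-cli, because npm prepends node_modules/.bin to the PATH when executing scripts. You can also call ./node_modules/.bin/gatsby directly.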


How I prepare to step into the SaaS game as a solo founder

As a young man in the software industry, I remember dreaming of having my own start-up or SaaS product. Something that scales and is not fully dependent on the hours I put in. Something semi-passive to work on. Even if it turned out to be a free product, for fun and exposure instead.

But then life happened. A decade has passed. I know: a convenient self-justification, when it’s really a question of prioritization and mindset.

It’s easy to forget to do recreational programming alongside everyday work, especially when there is no plan and the comfort zone takes over. Ideas have been in my head the whole time, but I never did anything actionable with them. They were more like dreams. And without a plan, they will remain so.

I am sure all this sounds similar to many people.


Recently I have started taking more seriously the plan to build something, either as a solo founder or as a team with my friend, who is an excellent designer. We work together in our digital agency anyway, so it would be a logical team-up. What is the reason behind this plan? I’m not really sure. Maybe the start of middle age is pushing on me, or I’m just seeking fun in coding. Who knows.

Here are my steps on how I plan to start working on something again.

Focus on one thing at a time

I am terrible at focusing on one thing for long if it’s not actual client work. The shiny object syndrome is strong with me, and I have to accept that weakness. My greatest hope is to build something small enough that I can power through it with enough momentum.

This means closing most doors and dropping most ideas, working only on the one most likely to be finished. Easy to say, hard to live by.

Work as if I were my own client

By contrast, I handle client work that spans many years, and there I can focus for long periods without a problem. The difference is that client work is scoped, has a fixed outcome, is split into steps, and has a timeline with a budget.

I need to do the same for my own projects. I tend to define my own objectives loosely, as if it were not real work. This needs to change. I have to be accountable for my plans. Most importantly, I need to set a “time budget” and actionable steps for myself.

I must let my inner perfectionist go

Of course, only in areas where perfectionism is not reasonable or relevant at the beginning. When it comes to my own code, I tend to over-engineer and think too big or too far into the future before even starting to write it.

The perfectionist inside me must be silenced because it kills the momentum.

Working on my own project means there is less accountability involved and less quality needed in the code and processes behind the scenes. I should literally be able to script something together without feeling guilty about how bad it is. I have to be able to let it go. Marketing, product validation, and design are the more important aspects anyway. Maybe no-code is even the way to go.

Marketing is my friend

This is definitely one of the hardest things I have to work on. I am not one to toot my own horn, yet ironically that is pretty vital in business. This writing here is likely also, unconsciously, part of self-marketing. For me, public writing is already scary, so this is a very big step forward.

Remembering fun old times

I think one of the components of success should be fun. There was a time when programming was done for fun, not for work. I built many small games and tools in my youth, just for fun. Before venturing into real problems, a small weekend project can boost confidence and morale. I should start with that. I would call it recreational programming with a more focused purpose.

Final words

Now that I have written it down publicly, I hope it keeps me on track. This article is very self-centred, but I hope it resonates with someone. Even if nobody reads it, writing it down already has a therapeutic effect for me.

Drop me a comment if this hits home for you too, and thank you for reading.


Apify Web Scraper: overcoming the memory limit exceeded error on the free tier

This post is about Apify’s generic Web Scraper actor on the free tier.

I needed a quick tool to automate one of my tasks, so I quickly whipped up multiple Web Scraper tasks and tested them individually on the Apify platform. But when it was time to integrate them into my backend, I stumbled upon the following problem.

Cannot run actor (By launching this job you will exceed the memory limit of 4096MB for all your actor runs and builds (currently used: 4096MB, requested: 4096MB). Please upgrade to a paid plan to increase your actor memory limit.)

I realised that running my actors (saved tasks) concurrently exceeded the 4 GB memory limit of my free tier. The problem did not occur while testing, since I tested them individually.

Queue problem

I wanted to start all the jobs at once without managing the queue myself in my backend. I could schedule the jobs with CRON to run at different times, but I would hit the same memory problem whenever a previous run had not yet finished. Theoretically, it’s possible to use webhook responses from Apify for that.

But it was easier to put all the URLs into the startUrls parameter instead. This way the memory limit is not exceeded, since the same instance goes through all the supplied URLs one by one, managing the queue itself.

One problem down.
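With all the URLs in one task, the backend only has to trigger a single run. Here is a rough sketch of how that run request could be built against Apify’s REST API; the task ID, token, and URLs are placeholders, and the endpoint shape is an assumption to verify against Apify’s API docs.

```javascript
// Hypothetical helper: builds the HTTP request for starting one
// actor-task run on Apify. "my-task-id" and "my-token" below are
// placeholders, not real credentials.
function buildRunRequest( taskId, token, startUrls ) {
  return {
    // Apify v2 endpoint for running a saved task (assumed shape).
    url: "https://api.apify.com/v2/actor-tasks/" + taskId + "/runs?token=" + token,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The body overrides the saved task input with our combined URL list.
    body: JSON.stringify( { startUrls: startUrls } )
  };
}

const request = buildRunRequest( "my-task-id", "my-token", [
  {
    url: "https://example.com/site-a",
    method: "GET",
    userData: { useStrategy: "strategy1", myOtherData: "" }
  },
  {
    url: "https://example.com/site-b",
    method: "GET",
    userData: { useStrategy: "strategy2", myOtherData: "" }
  }
] );

// A single fetch( request.url, { method: request.method, ... } ) then
// starts one run that scrapes every URL sequentially.
console.log( request.url );
```

Since only one run is active at a time, it never requests more than the memory of a single instance.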

Scraping function rewrite

Previously, I wrote an actor for each website I needed to scrape. Now I had to accumulate the different scraping strategies into one pageFunction. Not an ideal solution, but it serves our objective here.

But how do we know which strategy to use for each URL?

One way is to detect it from the URL itself, using the context given as an input parameter to the pageFunction; the URL of the loaded page is stored in context.request.loadedUrl. I went with the userData option instead, which can be found in context.request.userData, because I needed to pass in more data than the URL alone would allow.

Here is a snippet of my input JSON for 2 different strategies. You can also set userData in the Apify UI when modifying the actor/saved task input.

  "startUrls": [
    {
        "url": "[my secret url to scrape]",
        "method": "GET",
        "userData": {
            "useStrategy": "strategy1",
            "myOtherData": ""
        }
    },
    {
        "url": "[my other url to scrape]",
        "method": "GET",
        "userData": {
            "useStrategy": "strategy2",
            "myOtherData": ""
        }
    }
  ]

Here is the snippet of my page function to illustrate the strategies in use.
async function pageFunction( context ) {

  // Each strategy scrapes one site and returns its listings.
  function processStrategy1( loadedUrl ){
      // Do your stuff 1
      return [];
  }

  function processStrategy2( loadedUrl ){
    // Do your stuff 2
    return [];
  }

  let listings = [];

  switch ( context.request.userData.useStrategy ) {
    case 'strategy1':
      listings = await processStrategy1( context.request.loadedUrl );
      break;
    case 'strategy2':
      listings = await processStrategy2( context.request.loadedUrl );
      break;
    default:
      context.log.info( 'undefined strategy' );
  }

  return listings;
}

Final words

I hope this helps someone with a similar problem. Let me know if something is unclear or if you have a better idea.