<![CDATA[IORoot]]>, projects and ideas.]]>https://ioroot.com/https://ioroot.com/favicon.pngIORoothttps://ioroot.com/Ghost 3.40Thu, 24 Dec 2020 09:47:55 GMT60<![CDATA[Wordpress, Google OAuth and AJAX]]>https://ioroot.com/wordpress-oauth-and-ajax/5fe394773b12550001bc7452Tue, 15 Sep 2020 10:15:52 GMT

The absolute simplest method of using Google OAuth that I could figure out. This is a bare-bones, proof-of-concept, just-get-something-working type of deal. Don't use it in production, for the love of all that's holy.

This is replicated over on my github page here: https://github.com/IORoot/wp-plugin__oauth-demo

This is a downloadable plugin that you can use to study the process and a detailed explanation of what's going on below:

Wordpress & Google OAuth

This plugin will create a simple shortcode for a button that opens an OAuth window to request permission to use the user's YouTube account. It utilises the Google API client library and services composer packages.

This demo is about as simple as I could make it. However, it's still a little convoluted in my opinion. I'm sure there are better ways of doing it.

I've taken lots of concepts and ideas from https://github.com/ohfuchs/acf-oauth so if you want a full ACF Oauth package, then this is a great one to use.

However, in my use-case, I wanted to use the google api client library https://github.com/googleapis/google-api-php-client and its services. Therefore I had to work out the steps of going about doing all this myself.

Installation

You will need to do the following steps:

  1. Clone the repo into your wp-plugins directory.
  2. Activate the plugin.
  3. Create a new google API project with OAuth 2.0 credentials in the google API Console. https://console.developers.google.com/
  4. Add the YouTube Data API v3 API into the project.
  5. You may have to set up consent pages and usage agreements.
  6. You must add https://MYDOMAIN.com/wp-admin/admin-ajax.php as an Authorized redirect URI.
  7. Download the JSON credentials into the root of the plugin folder and call the file client_secret.json
  8. Run a composer install in the plugin folder to install all dependencies (google-api-php-client and google-api-php-client-services).
  9. Run a composer dumpautoload to autoload all of the classes.
  10. Use the shortcode [andyp_oauth] on a page to render the OAUTH button.

How it works

The OAuth workflow seems quite tricky and complex to follow, but once you break it down into its component parts, it's much more manageable to understand. Here are the parts:

Step 1 - Creating an Application

Telling google you have an application and you want to give it permissions to use a specific API

You can use the Google API Console (https://console.developers.google.com/) service to create a new project that tells google you are creating a new web application. Once you go to the website you'll want to do the following things:

New Project. A project is all the settings for this application you are creating. You can create multiple projects for different purposes. Each project has quotas on how much it can use each API. Give the project a name and create it.

Consent Screen. Here you can select the different parts of the project that the user will see when authorising your application.

Library. Under the Library side-menu option you can select the specific APIs you want to use. In this demo I'm using the YouTube Data API v3. Select that and enable it.

Credentials. Now you have to setup a way to use this new project. There are three methods available, each with a different use-case. API Keys, OAuth 2.0 Client IDs, and Service Accounts. Click the button at the top of the page + CREATE CREDENTIALS and select "OAuth client ID".

Application Type. This is a "web application". This dictates the way the OAuth process works.

Name. Give your OAuth client ID an appropriate name.

Authorized redirect URIs. These MUST be exactly right. Slashes on the end make a difference, as does the protocol (http or https).

Download JSON. Once all of the details are filled in, download the JSON file with all of the credentials to the root of the demo plugin folder and call the file client_secret.json. It must be called this because a constant called DEMO_APPLICATION_CREDENTIALS looks for this file.
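For reference, the plugin points that constant at the credentials file. A minimal sketch of what that define might look like (the exact spot it lives in the real plugin may differ):

// Hypothetical sketch: the constant the rest of the plugin uses to find the credentials.
// plugin_dir_path( __FILE__ ) resolves to this plugin's own folder.
define( 'DEMO_APPLICATION_CREDENTIALS', plugin_dir_path( __FILE__ ) . 'client_secret.json' );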

Now you have setup a way to communicate with google. They now know you have an application that needs access to different people's YouTube accounts depending on who authorises it.

They also know a user will be using the OAuth 2.0 workflow to tell them to allow your application to have permission to use their YouTube account.

Lastly, they know that once the user has completed granting access to their account for your application, google will redirect them back to https://yourdomain.com/wp-admin/admin-ajax.php

Step 2 - Composer

Composer is a package manager that automatically installs any php packages you want. In our case, we want to install the Google API client and the YouTube service that comes with it. The installation method is described on their github page here: https://github.com/googleapis/google-api-php-client

My composer.json file tells composer what to install. So, by running composer install you'll install everything you need.
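For reference, a minimal composer.json along these lines would be enough to pull in the client library (the version constraint here is an assumption; google/apiclient should bring the services package in with it):

{
    "require": {
        "google/apiclient": "^2.0"
    }
}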

Step 3 - Shortcodes

/src/shortcode/button.php

This uses the wordpress add_shortcode function to declare the word andyp_oauth as the name of the shortcode and a function to run.

This function then does two things:

It renders a <button> with a specific ID that will be picked up by our javascript later.

id="andyp__youtube-oauth--button"

It runs our demo_youtube class and returns any result in JSON. On initial installation, this will be nothing because we haven't authenticated yet.

Add the shortcode [andyp_oauth] onto any page and it'll render the button and any result as a JSON object.
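As a rough sketch (the callback function name and button text here are made up; the shortcode name, button ID and demo_youtube class come from the plugin), the registration looks something like this:

// Hypothetical sketch of /src/shortcode/button.php
add_shortcode( 'andyp_oauth', 'andyp_oauth_button' );

function andyp_oauth_button() {
    // 1. The button the JavaScript click handler looks for.
    $output = '<button id="andyp__youtube-oauth--button">Connect YouTube</button>';

    // 2. Run the demo_youtube class and append any results as JSON.
    $youtube = new demo_youtube();
    $youtube->run();
    $output .= '<pre>' . json_encode( $youtube->get_results() ) . '</pre>';

    return $output;
}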

Step 4 - Enqueue Javascript

/src/js/enqueue_js.php

This is where we start getting into the nitty-gritty. Before we start, you'll notice there are comments all over and some code commented out; this is because this demo app is meant for the frontend. The commented-out bits allow you to use the code in the backend too. For instance, the add_action at the top of the file has a commented-out second declaration for admin_enqueue_scripts for backend usage.

This file hooks the function we define onto the wp_enqueue_scripts wordpress action. That function then does the following things:

Adds jQuery and our custom demo_oauth.js script.
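A minimal sketch of that enqueue call (the version number and exact path handling are assumptions; the 'demo-oauth-script' handle is the one referenced further down):

// Hypothetical sketch: register the demo script with jQuery as a dependency, loaded in the footer.
wp_enqueue_script(
    'demo-oauth-script',
    plugin_dir_url( __FILE__ ) . 'src/js/demo_oauth.js',
    array( 'jquery' ),
    '1.0.0',
    true
);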

Create a new Google_Client object from the google-api-php-client package.

/*     
* Generate AUTH URL with Google Client Library.
*/
$client = new Google_Client();
$client->setAuthConfig(DEMO_APPLICATION_CREDENTIALS);
$client->addScope( Google_Service_YouTube::YOUTUBE_FORCE_SSL );
$client->setPrompt('consent');
$client->setAccessType('offline');

This will build up a new Google Client object.

Set the authentication config (using our client_secret.json file - assigned to that constant).

Add a YouTube scope. Think of a scope as a specific permission level - YOUTUBE_FORCE_SSL is a full-access permission level.

Finally, set the prompt so the consent screen is shown, and set the access type to offline.

The next part is to utilise the state parameter that we can send to the google OAuth server. It's essentially not used by them; it's for us on the return trip once the user has been authenticated.

What we're going to do is use wordpress's AJAX functionality to read any returned values and do something with them. The way we have to set this functionality up is by redirecting back to the admin-ajax.php file (remember we specified that in the Google console as the return URI).

However, this file expects at least one parameter called 'action' to indicate which function you want it to run.

The head-scratcher problem is that the google API does not have an 'action' parameter and won't allow any extra ones to be added. This is where the state parameter comes in. We're going to send a json_encoded array of 'action' => 'demo_oauth_callback' within the state parameter and set up (later - see below) a catcher to json_decode the state parameter and append its contents (this key-value pair) as an extra parameter BEFORE it gets sent to the admin-ajax.php file. Cool, huh?

Alright, well, to setup this state parameter, we do this:

/**
* The "action" parameter tells the admin-ajax.php system 
* which Action to run.
* In this case, the action is "demo_oauth_callback" which is 
* defined as an AJAX endpoint in the 
* /actions/oauth_callback.php file.
*/
$demo_state_args = array(
	'action' => 'demo_oauth_callback'
);

$state = base64_encode( json_encode( $demo_state_args ) );

$client->setState($state);

The google client library allows us to generate an authentication URL based off all the settings we specified above. To do this is a one-liner:

$auth_url = $client->createAuthUrl();

This will return with a long URL that we can visit to open up the start of the OAuth process. However, we want to send it to our Javascript to open up a new tab window instead.

The last part is to take the generated authentication URL and make it available to our Javascript on the front-end. To do this, we can utilise the wordpress wp_localize_script function to send any values to the frontend. We want two values:

  • The admin-ajax.php file url.
  • The authentication url we just generated.

The wp_localize_script needs to know which javascript file to tie the values to and the name of the data object to nest these values under.

/**
* Make these values accessible in the Javascript file.
* 
* In JavaScript, these object properties are accessed as 
* ajax_object.ajax_url
* ajax_object.auth_url
*/
wp_localize_script( 'demo-oauth-script', 'ajax_object', 
	[
    'ajax_url' => admin_url( 'admin-ajax.php' ), 
		'auth_url' => $auth_url
	] 
);

This then will link to the demo-oauth-script which we enqueued at the top of the function. And the object with all the data is called ajax_object.

Ok, so we've now loaded our javascript into our footer of the page, our authentication URL has been generated and we've made that available to the javascript.

Step 5 - Javascript

/src/js/demo_oauth.js

When you open this file up you can see it's pretty damn basic.

(function($){
/**
* The ajax_object.auth_url object is passed in from the 
* wp_localize_script function in enqueue_js.php file. 
*/
$('#andyp__youtube-oauth--button').on( 'click', function(){
		var win = window.open( 
    		ajax_object.auth_url, 
    		"_blank", 
    		"width=600,height=600" 
  	);
});

})(jQuery);

All this does is the following:

  1. Make the jQuery available as $
  2. Search for our button ID '#andyp__youtube-oauth--button'
  3. Attach a click event onto it which will run a function.
  4. The function will create a new blank window that points to the authentication URL we created in the enqueuing process above.
  5. Success!

Step 6 - OAuth

At this point, the user will click the button and open the new window with the authentication URL as the target. They will be presented with the OAuth steps from Google to select a user / account from YouTube and to allow access to the project.

Note - You WILL get a warning saying "This App isn't Verified". You'll need to click on the 'Advanced' link and then the "Go to yourdomain.com (unsafe)" to proceed.

This will disappear once your app has gone through the google verification process. However, for this demo's purposes, there's no need.

Step 7 - The Callback

/src/callback/ajax_callback.php

Once the user has gone through the entire process of the OAuth steps, the google OAuth server will redirect the user to our redirect URI of https://yourdomain.com/wp-admin/admin-ajax.php with all of the bits we need to run an authenticated API call.

So now we need to setup that AJAX function to listen for that returning response from google. To do that we use the wordpress wp_ajax_ actions.

Wordpress allows us to set this up by specifying what 'action' parameter to listen for (remember we put the action = demo_oauth_callback into the state of the request - see Step 4 above ) and also what function to run when it sees that specific 'action' parameter.

So we need to set up an AJAX listener for the action demo_oauth_callback and then run a function when it sees it.

add_action( 'wp_ajax_demo_oauth_callback', 'demo_oauth_callback', 8);

This line will do exactly that.

Note that if you want to set up an AJAX listener, or endpoint, or whatever you want to call it, you have to prefix the action name with wp_ajax_.

Now, when it sees an action=demo_oauth_callback in a request to the page admin-ajax.php, the function (called the same thing - standard practice in wordpress) 'demo_oauth_callback' will run.

You'll notice that there is a very similar action called wp_ajax_nopriv_demo_oauth_callback too. This is so frontend, non-logged-in users can use the button too. If you're only using this on the backend, there's no need for that action.

The function that runs, demo_oauth_callback really only does two things:

Sets a transient of 300 seconds (this is a database entry with a time limit on when it's removed).

set_transient( 'DEMO_OAUTH_CODE', $_REQUEST['code'], 300 );

The google OAuth server returns all the data we need in the global $_REQUEST object. The ['code'] is the access code we can use to run API Calls! Whoo!

The issue is that it's very short-lived and we don't want to keep re-authenticating every 5 minutes. But we'll deal with that later. For now, let's just store the value in the database.

Output some text and then run the wp_die() function. You need to do this to properly terminate the admin-ajax request.
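Putting that together, the callback is little more than this (a sketch; the exact output text is an assumption):

// Hypothetical sketch of the function hooked in /src/callback/ajax_callback.php
function demo_oauth_callback() {
	// Store the short-lived access code Google sent back, for 5 minutes.
	set_transient( 'DEMO_OAUTH_CODE', $_REQUEST['code'], 300 );

	// Some feedback for the popup window.
	echo 'Authorised! You can close this window now.';

	wp_die(); // terminate the admin-ajax request properly
}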

There we have it, we have the Access code in our database to use!

Step 8 - Decode state.

/src/callback/decode_state.php

Not so fast... remember we have that little issue of the action parameter being inside the state parameter. Well we need to extract that value and add it as an action parameter itself.

For example, when the OAuth server sends the user back to admin-ajax.php, the request will look a little like this:

https://yourdomain.com/wp-admin/admin-ajax.php
?state=eyJhY3Rpb24iOiJkZW1vX29hdXRoXANhbGxiYWNrIn0%3D
&code=4/4AElLX-I-BWrqG1t-gVJLi03lYYSPXysL70-w4yMI2Kt2if8CIAT2wL3PSPGTatNJ6B_tyQH5WczRr5At6firAI
&scope=https://www.googleapis.com/auth/youtube.force-ssl

Notice there's no &action= there. That's because it's inside the part:

?state=eyJhY3Rpb24iOiJkZW1vX29hdXRoX3NhbGxiYWNrIn0%3D

So we need to grab that string, base64_decode and json_decode it, then add any of the key-value pairs as real request parameters.

I'll say upfront that this file is basically stolen from https://github.com/ohfuchs/acf-oauth , so go give him a star and have a look at his repo for a deeper analysis.

We're going to add a new action on init that will run with a priority of 9, do some checks and intercept the $_REQUEST['state'] parameter.

add_action( 'init',	'_decode_repsonse_state', 9 );

And the function _decode_repsonse_state does the following:

  1. Checks for admin permission
  2. Checks for a logged-in user
  3. Checks to see if AJAX is running
  4. Checks if there is a $_REQUEST['state'] parameter
  5. If all passes, base64_decode() the value
  6. Run a json_decode() on the value
  7. Merge the array key and value onto the existing $_REQUEST array.
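A stripped-down sketch of that function (I've collapsed the permission and logged-in checks the real file does into a single AJAX check, so treat this purely as an illustration):

// Hypothetical sketch of /src/callback/decode_state.php
function _decode_repsonse_state() {
	// Only act on AJAX requests that actually carry a state parameter.
	if ( ! wp_doing_ajax() || empty( $_REQUEST['state'] ) ) {
		return;
	}

	// Undo the base64 + JSON encoding we applied in enqueue_js.php.
	$state = json_decode( base64_decode( $_REQUEST['state'] ), true );

	if ( is_array( $state ) ) {
		// Merge the key-value pairs (e.g. action = demo_oauth_callback)
		// into the request so admin-ajax.php can route it.
		$_REQUEST = array_merge( $_REQUEST, $state );
	}
}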

Now we have an action=demo_oauth_callback in the $_REQUEST array that the admin-ajax.php file will pick up and run.

Step 9 - Refresh Tokens

/src/youtube.php

The access token is nice and safe within the database for the next 5 minutes, but as mentioned before, we can't keep authenticating like that. This is where we now generate a refresh token that won't run out and allows us to generate more access tokens without going through the whole process again.

The demo_youtube class has a run() method that kicks everything off and initially does these steps:

Get any existing tokens from the transient DB.

Create a new google client from the google-api-client-php package.

Check to see if we have a refresh_token (which, on the first run through, we don't).

Check to see if the user actually ran the OAuth flow.

Create a new refresh token by running the get_auth_token() method.

/**
* get_auth_token 
* 
* Not authenticated yet, so do so and set refresh token.
* Refresh token set for 1 week.
* 
* @return void
*/
public function get_auth_token()
{

	$this->client->authenticate($this->auth_token);

	$this->refresh_token = $this->client->getRefreshToken();

	set_transient( 
		'DEMO_OAUTH_REFRESH_TOKEN', 
		$this->refresh_token, 
		WEEK_IN_SECONDS 
  );

}

This will do the following three things:

  • Use the auth_token we got back from google to authenticate the google client. This is done simply by running its own authenticate method with the token as a parameter.
  • Once the client is authenticated, grab a new refresh_token by using the getRefreshToken() method.
  • Save the refresh_token into another transient within the database for 1 week. (sidenote - I know this is insecure, it's for demo purposes only)

Now we have the refresh_token we can continue onto running the youtube request.

Step 10 - YouTube Request

/src/youtube.php

Now that we have a refresh_token and an auth_token (for the next 5 minutes at least), we can run the run_youtube_request() method.

This uses the second composer installed package, the google-api-php-client-services that includes the YouTube service.

https://github.com/googleapis/google-api-php-client-services

Now we can create a new instance of the google YouTube service by running the line:

$this->service = new \Google_Service_YouTube($this->client);

You can see it requires an instance of an authenticated client to work.

Once this is done, we can now call the YouTube API quite easily. Our service object can do a whole host of API requests. See the YouTube API documentation for more information:

https://developers.google.com/youtube/v3/docs

Our simple call will just list the channels the authenticated user has access to. So we can run the listChannels() method on the service object to do this. However, we need to pass any query parameters too, so just provide an array with those in it.

$queryParams = [
  'mine' => true
];
$this->results = $this->service->channels->listChannels( 'snippet,contentDetails,statistics', $queryParams );

This will set the $this->results parameter to the result of the query.

Step 11 - Using a Refresh Token

/src/youtube.php

Let's return to the run() method of the demo_youtube class.

The first check is whether there is a refresh_token or not. The second time around, there will be, so now we can use that token to authenticate the client instead. That's done in the use_refresh_token() method.

Simply pass the token to the client's refreshToken method and it'll reauthenticate and allow you to continue using the API calls.

public function use_refresh_token()
{
	$refresh_token = get_transient('DEMO_OAUTH_REFRESH_TOKEN');
	$this->client->refreshToken($refresh_token);
}
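Pulling steps 7 to 11 together, the run() method of the demo_youtube class roughly follows this shape (a sketch of the flow, not the literal source):

// Hypothetical sketch of the run() method in /src/youtube.php
public function run()
{
	// Tokens stored earlier as transients (steps 7 and 9).
	$this->auth_token    = get_transient( 'DEMO_OAUTH_CODE' );
	$this->refresh_token = get_transient( 'DEMO_OAUTH_REFRESH_TOKEN' );

	$this->client = new Google_Client();
	$this->client->setAuthConfig( DEMO_APPLICATION_CREDENTIALS );
	$this->client->addScope( Google_Service_YouTube::YOUTUBE_FORCE_SSL );

	if ( $this->refresh_token ) {
		// Second time around: re-authenticate with the stored refresh token.
		$this->use_refresh_token();
	} elseif ( $this->auth_token ) {
		// First time around: exchange the access code for a refresh token.
		$this->get_auth_token();
	} else {
		// The user hasn't been through the OAuth flow yet - nothing to do.
		return;
	}

	$this->run_youtube_request();
}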

Step 12 - Get Results

/src/shortcode/button.php

If we go back to the button code, you can see that it uses the run() method to do all the bits to authenticate, run the YouTube API call and store the result in the $this->results property.

It then runs the $this->get_results() method to return those values, does a json_encode (to make it a little more readable) and echoes it out to the screen.

Summary

I'm actually using this as an ACF button (within a message field type) and it works quite well. I've stripped away as much as I could to make it easier to understand, but the core is there.

Closing notes:

  1. Security. Storing the tokens in cleartext in the DB as transients has to be insecure and I'm sure there must be better methods.
  2. Make sure that the .gitignore file is pointing at your client_secret.json file. You don't want to commit that into your repository or github.
  3. There's probably a better way with REST APIs instead of AJAX methods. This might be another project.

Pull requests welcome. Good luck and let the force be with you.

]]>
<![CDATA[Wordpress AJAX]]>https://ioroot.com/wordpress-ajax/5fe394773b12550001bc7451Fri, 11 Sep 2020 15:48:22 GMT

In the slow process of trying to work out how to implement google OAUTH, I need to recap on AJAX.

These are simple diagrams of the descriptions made on the official wordpress pages here:

https://codex.wordpress.org/AJAX_in_Plugins

1. Using inline JS within PHP.


1.1 Declare the Javascript in PHP

Step one is to embed the javascript that will run your AJAX call into the page that will run it. This is done with a wordpress action.

add_action( 'admin_footer', 'my_action_javascript' );

function my_action_javascript() { ?>
	<script type="text/javascript" >
    jQuery(document).ready(function($) {

  	var data = {
    	'action': 'my_action',
    	'whatever': 1234
  	};
  
  	// jQuery.post( url [,data ] [,success ] [,dataType ] )
  	jQuery.post(ajaxurl, data, function(response) {
    	alert('Got this from the server: ' + response);
  	});

	});
	</script> <?php
}

The jQuery.post will call the wordpress admin-ajax.php file (shortened to the global ajaxurl ).

This will pass the 'action': 'my_action' in the data. The action is used to route to the correct backend code: wordpress will look for the wp_ajax_my_action hook and run whatever function is attached to it.

1.2 Declare the AJAX callback.

We need to create the function to run when the AJAX call is made. This is done through another add_action.

add_action( 'wp_ajax_my_action', 'my_action' );

function my_action() {
	$whatever = intval( $_POST['whatever'] );
	$whatever += 10;
  echo $whatever;
	wp_die(); // this is required to return a proper response
}

The flow.

  1. JS added to footer.
  2. When page loaded, JS will run.
  3. JS will make AJAX call to ajaxurl with action my_action
  4. Function my_action is tethered to the wp_ajax_my_action action.
  5. The AJAX call will run my_action and return a value.
  6. Response is sent back to footer script.
  7. Alert box echoes the response value.

2. Using Javascript Files.


Using this method we can pass values TO the javascript file first.

2.1 Enqueue Javascript

Create the Javascript file and enqueue it through an action function. (As you would any other JS file).

add_action( 'admin_enqueue_scripts', 'my_enqueue' );

function my_enqueue($hook) {
    
		wp_enqueue_script( 
    	'ajax-script', 
    	plugins_url( '/js/my_query.js', __FILE__ ),
    	array('jquery') 
  	);

    // in JavaScript, object properties are accessed 
  	// as ajax_object.ajax_url, ajax_object.we_value
    $data = [
        'ajax_url' => admin_url( 'admin-ajax.php' ), 
        'we_value' => 1234 
    ];
  
	wp_localize_script( 'ajax-script', 'ajax_object', $data );
}

Secondly, we can use the wp_localize_script to pass values INTO the javascript.

2.2. Write Javascript.

Once we've passed our data to the javascript, we can then use it.

jQuery(document).ready(function($) {
	var data = {
		'action': 'my_action',
		'whatever': ajax_object.we_value      
     // We pass php values differently!
	};
	// We can also pass the url value separately 
  // from ajaxurl for front end AJAX implementations
	jQuery.post(ajax_object.ajax_url, data, function(response)
  {
		alert('Got this from the server: ' + response);
	});
});

The ajax_object.we_value and ajax_object.ajax_url contains the data passed to the javascript file from the PHP.

Flow

  1. Action enqueues javascript file.
  2. Data is passed with wp_localize_script function.
  3. Javascript file is added to footer
  4. Javascript uses the localised data to fill its data array.
  5. The JS data array is sent to the callback URL, which was ALSO passed in via PHP.
  6. The callback processes as before and passes the response back.
  7. Alert is made on response.
]]>
<![CDATA[🚲 E-Bike Research]]>https://ioroot.com/e-bike-research/5fe394773b12550001bc7450Sat, 20 Jun 2020 09:38:36 GMT

I want to buy an affordable e-bike because the amount of cash I'm saving during the lockdown from not having to pay for travel is pretty good and I'd like to keep it up... Those who know me know I like to do research. These are the results:

Overview

MODEL | Note | Price
Fiido D1 | Review | 499
Fiido D2 | Review | 526
Fiido D2s | Review | 633
etura | Too expensive | 1199
Brompton Electric H2L | Too expensive | 2595
Carrera Crosscity | Too expensive | 999
Raleigh Evo | Too expensive | 1349
NCM London E-BIKE | Too expensive | 1089.84
Decathlon B'Twin 500 | Review | 446.21
Gocycle | Too expensive | 2899
Xiaomi QICycle | Too expensive | 999
Windgoo Electric Bike | NO, 25km/h | 279.99
eelo 1885 pro | Too expensive | 1104.99
MiRiDER One | Too expensive | 1300
Convincied ONEBOT S6 | Review | 579.99
Vektron S10 | Review | 3600
Hummingbird | Too expensive | 4495
Emu mini | Too expensive | 999 / 799
kwikfold xcite | Too expensive | 1099
Dawes Arc-II | Too expensive | 1099
Byocycle Tornado | Review | 669.99
Byocycle City | Review | 749.99
Vision 20 | Same as vision / viking / Atwater / falcon | 469.95
Viking Gravity | Same as vision / viking / Atwater | 549.99
Falcon Surge 20 | Same as vision / viking / Atwater | 899.99
eplus 20 | Same as vision / viking / Atwater, 15mph | 429.99
e-go bike | Review | 869.00
Jannyshop 14" | Review | 598.89
kudos k16 | Review | 595.00
kudos secret | Review | 795.00

Possibles in my price range:

  • Fiido D1, D1 10.4ah, D2, D2s.

  • B'Twin Tilt 500

  • Onebot S6

  • Viking Gravity 2020

  • Kudos k16

Fiido D1

MODEL | Speed | Power | Weight | Range | Wheels | Battery | Price
Fiido D1 | 25km/h | 250W | 17.5kg | 40~55km | 14'' | 36v 7.8AH Lithium Battery, 2600mah | 414.89
Amazon | – | 250W | 18kg | 50 km to 90km | – | 10.4 Ah (longer lasting) | 519.99

Fiido D2

MODEL | Speed | Power | Weight | Range | Wheels | Battery | Price
Fiido D2 | 25 km/h | 250W | 19kg | 40-80 | 16'' | 7.8 Ah | 526
  • single speed

Fiido D2s

MODEL | Speed | Power | Weight | Range | Wheels | Battery | Price
Fiido D2s | 25 km/h | 250W | 19kg | 40-80 | 16'' | 7.8 Ah 36v 280Wh | 446.21

Note - The D3 is NOT foldable.

B'Twin Tilt 500

MODEL | Speed | Power | Weight | Range | Wheels | Battery | Price
Decathlon B'Twin 500 | 25 km/h | 250W | 18.6kg | 20-35 | 20'' | 7.8 Ah | 749.00
  • 6-speed gears.

  • 35 km in economy mode/25 km in normal mode/20 km in sport mode.

  • 250W brushless motor, 24V battery built into frame, 7.8 Ah electric assistance controlled by pedalling sensors.

  • Battery: lifespan of 350 to 500 charge cycles. Warranty: 2 years.

  • Controller looks shit.

https://www.youtube.com/watch?v=IIX8aHsJv7w

https://ebikechoices.com/btwin-tilt-500-folding-electric-bike-review/

https://www.expertreviews.co.uk/cycling/1409776/btwin-tilt-500-electric-review-best-budget-folding-electric-bike

Bit big for tube maybe?

Onebot S6

Model | Speed | Power | Weight | Range | Wheels | Battery | Price
ONEBOT S6 | 25km/h, 16mph | 250W brushless | 18kg | 60km | 16'' | 36V 5.2Ah LG lithium | 579.99

https://www.amazon.co.uk/Convincied-Faltrad-Adjustment-Lightweight-Magnesium/dp/B0895P79T8

Viking Gravity 2020 aka Evo Atwater

Model | Speed | Power | Weight | Range | Wheels | Battery | Price
Viking Gravity | 20 mph | Promovec 250W, 36V rear hub | 23kg | 22 miles / 35km | 20'' | 24v 8.8ah - 24V UK Charger | 549.99

Kudos k16

Model | Speed | Power | Weight | Range | Wheels | Battery | Price
kudos k16 | – | Bafang 250 watt x 36v hub motor - 220w | 15kgs (±5%) | 30-45 miles range | 16'' | Samsung 10.4Ah lithium battery | 595.00

Others :

  • Windgoo
  • Jannyshop
  • Falcon Surge
  • Eplus 20

Windgoo

Model | Speed | Power | Weight | Range | Wheels | Battery | Price
Windgoo Electric Bike | 25km/h | 350W | 12kg | 15-25 | 12'' | 6Ah, 36v. Lithium | 279.99

Ugly. Nope. Doesn't fold.

JannyShop

Model | Speed | Power | Weight | Range | Wheels | Battery | Price
Jannyshop 14" | 35kmph | 350w | 23kg | 30-35 km | 16'' | 48V/12AH Removable Lithium Battery | 598.89

Falcon Surge

Model | Speed | Power | Weight | Range | Wheels | Battery | Price
Falcon Surge 20 | – | 250W rear hub motor | 20kg | 40-45km | 20'' | 36V 10Ah (360Wh) rear carrier mounted lithium battery | 899.99

Eplus 20

Model | Speed | Power | Weight | Range | Wheels | Battery | Price
eplus 20 | Same as vision / viking / Atwater, 15mph | 250W | 23.8kg | 24 miles per full charge | 20'' | 24v/250w, 7Ah | 429.99
  • Heavy
  • Argos
  • Good reviews on argos.

ByoCycle Tornado

Model | Speed | Power | Weight | Range | Wheels | Battery | Price
Byocycle Tornado | 15.5mph | 250w Jiabo Brushless Geared Motor | 19kg | 20 miles | 20'' | Samsung 36v 7.8AH Lithium Ion | 669.99
  • Old
  • Hard to find resources.

https://www.bridgendcyclecentre.com/m1b0s129p3183/BYOCYCLE-Chameleon-Tornado

]]>
<![CDATA[Wordpress Simple Actions & Filters]]>https://ioroot.com/wordpress-simple-actions-filters/5fe394773b12550001bc744fFri, 01 May 2020 09:16:57 GMT

Curse the gods, this simple functionality can seem complicated. I think it's a combination of the terminology and word choice being so unhelpful. Do I add, apply or do a filter or action? What is a hook here? Or maybe I need to register it? It's all very vague, and I constantly forget.

Anyway, here's some pretty pictures instead. I've made it as basic as possible, more for remembering which one to use and where.

🧗🏻‍♀️ Basic Actions

TL;DR: Plug in new code at a specific point.

  1. Once you've registered your code, you're now 'hooked' into the action 'do_something_here'
  2. Whenever the code gets to the do_action('do_something_here'), it'll now run your function 'run_action_function'.
  3. No need to return anything.
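A minimal sketch in PHP, using the names from the list above (the error_log line is just a stand-in for whatever you actually want to do):

// Your plugin/theme: hook a function onto the 'do_something_here' action.
add_action( 'do_something_here', 'run_action_function' );

function run_action_function() {
	// Do whatever you need at this point. Nothing has to be returned.
	error_log( 'do_something_here just fired' );
}

// Wherever the hook point lives, this line triggers every hooked function.
do_action( 'do_something_here' );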

🔁 Basic Filters

TL;DR: Pass in a value, change the value, pass the new value back.

  1. Once you've registered your code, you're now 'hooked' into the filter 'run_a_filter_on'.
  2. Whenever the code gets to apply_filters('run_a_filter_on', $value), it'll now run your function 'run_filter_function'.
  3. The old value $value is passed into the function.
  4. MUST return something (the new value).
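And the filter equivalent, again a minimal sketch using the names from the list above (strtoupper is just an example transformation):

// Your plugin/theme: hook a function onto the 'run_a_filter_on' filter.
add_filter( 'run_a_filter_on', 'run_filter_function' );

function run_filter_function( $value ) {
	// Change the value however you like - but you MUST return it.
	return strtoupper( $value );
}

// Wherever the filter point lives, the value is passed through every hooked function.
$value = apply_filters( 'run_a_filter_on', $value );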

Closing.

Filters and actions have more arguments to them and are very versatile.

I suggest you look at:

https://developer.wordpress.org/reference/functions/apply_filters/

https://developer.wordpress.org/reference/functions/add_filter/

🔗 Action Links:

https://developer.wordpress.org/reference/functions/do_action/

https://developer.wordpress.org/reference/functions/add_action/

]]>
<![CDATA[👨🏻‍🎨 SVG effects for Instagram-like photo editing adjustments]]>https://ioroot.com/svg-effects-for-instagram-like-filters/5fe394773b12550001bc744eSun, 12 Apr 2020 17:22:10 GMT

Note, this article is going to try to be as simple as possible. I'm not going into the depths of SVG functions and elements... more just a reference on how to do specific things. It's a cobbled collection of information I've found from other people rather than any critical-thinking on my part. Check out the links for more in-depth info on each topic.

TL;DR Quick Reference is at the bottom of the page.


Just some background info...

There you are, developing away on your latest project and CSS is kicking ass. It's allowing you to do all of those cool blend-modes and effects but you want to combine it with some SVG goodness.

This was what I was trying to achieve on my latest project for my company website LondonParkour.com

I essentially built a WordPress plugin (It's super alpha stage - don't judge me) to build SVGs with a post image as a base. I could then layer effects and vector shapes over the top of the image and do whatever I liked with it. This gave me the flexibility to apply, in bulk, to any group of posts I wanted.

The thinking was to have some generative-art part to it which creates random shapes and what-not, (you can see the results on the londonparkour.com article section), but I'm digressing...

What I also wanted to achieve was Instagram-like adjustments to the images - in SVG. So that's what this article is all about - how to get those effects. I'll link to all of the posts I used to help me figure this all out too, but essentially, this is how to do the basics of image adjustments on bitmap images using SVG filters and shapes.

Main Links of where I found this information are:

Big thanks to all of these people and their efforts. I've just copied their hard work and tweaked it a little to do what I needed, these are the real innovators who figured out how the filters worked.

🎛 The Adjustments

🏁 The beginning

So, we need an image to start with. I'm going to use this one I took of Big Ben.

Our base SVG will be this:

<svg viewBox="0 0 1500 1000">
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000"></image>
</svg>
Base SVG

I'm not going to bother with the xmlns="http://www.w3.org/2000/svg" and the xmlns:xlink="http://www.w3.org/1999/xlink" for now. I want to make this all as easy to read as possible.

Which gives us this:

Alright, let's get our mitts dirty. We need to define (as in, use the <defs></defs> tags) some filters to use on the image. So let's start with:


🎨 Saturation

This is an easy one to get going with. The magic of saturation is this code:

<feColorMatrix 
	type="saturate" 
    in="SourceGraphic" 
    values="3"
 />
Saturation Filter

Now, to use this code, we need to define a filter using the <filter id="overSaturateThis"></filter> tags with an ID to use. We then tell the image to use this filter. Pretty simple.

The input is the sourceGraphic (the image) and we're just using a 'saturate' on the image. The value can be changed up or down from 1. 1 being no change. 3 is hugely oversaturated.

So the code becomes:

<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="overSaturateThis">
            <feColorMatrix 
            	type="saturate" 
                in="SourceGraphic" 
                values="3"
            ></feColorMatrix>
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#overSaturateThis)"></image>
</svg>

🌞 Brightness

Brightness is slightly more complex, but not much. Essentially, we can fiddle with the Red, Green & Blue elements of the pixels to increase or decrease the brightness.

<feComponentTransfer in="sourceGraphic">
    <feFuncR type="linear" slope="0.5"/>
    <feFuncG type="linear" slope="0.5"/>
    <feFuncB type="linear" slope="0.5"/>
</feComponentTransfer>
Brightness filter

You can see the feFuncR controls the Red, feFuncG the green and feFuncB the blue. The slope allows us to increase or decrease the brightness by giving a number lower or higher than 1.

Let's bring the brightness down by half (0.5) in this example.

<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="lowerTheBrightness">
            <feComponentTransfer in="sourceGraphic">
                <feFuncR type="linear" slope="0.5"/>
                <feFuncG type="linear" slope="0.5"/>
                <feFuncB type="linear" slope="0.5"/>
            </feComponentTransfer>
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#lowerTheBrightness)"></image>
</svg>

⬛️⬜️ Contrast

Alright, this one requires a little maths... Nothing difficult though.

Start with the amount of contrast you want to add / remove from the picture... let's say increase contrast to 1.2

Halve the value: 0.6

Invert it: -0.6

Add 0.5 to it: -0.1

And that's the calculated value you're going to use in the intercept part of the filter. The slope is the original contrast value you want to use.
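In other words: intercept = 0.5 - (contrast / 2), and the slope is just the contrast value itself.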

  <feComponentTransfer in="sourceImage">
      <feFuncR type="linear" slope="1.2" intercept="-0.1"/>
      <feFuncG type="linear" slope="1.2" intercept="-0.1"/>
      <feFuncB type="linear" slope="1.2" intercept="-0.1"/>
  </feComponentTransfer>
Contrast Filter

Plug it into the full code to affect the image and you've got this.

<svg viewBox="0 0 1500 1000">
    <defs>
    	<filter id="enhanceTheContrast">
     		<feComponentTransfer in="sourceImage">
                <feFuncR type="linear" slope="1.2" intercept="-0.1"/>
                <feFuncG type="linear" slope="1.2" intercept="-0.1"/>
                <feFuncB type="linear" slope="1.2" intercept="-0.1"/>
           </feComponentTransfer>
   		</filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#enhanceTheContrast)"></image>
</svg>

🔪 Sharpening

Yeah! That's right. You can sharpen SVG images too. That's pretty bad-ass if you ask me.

And it's another one liner to boot.

<feConvolveMatrix in="SourceGraphic" order="3" preserveAlpha="true" kernelMatrix="-1 0 0 0 4 0 0 0 -1"/>
Sharpening Filter

Playing with the -1 up and down can increase or decrease the sharpness. Play around with the numbers to see what you can come up with.

<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="sharpenTheImage">
            <feConvolveMatrix 
            	in="SourceGraphic" 
                order="3" 
                preserveAlpha="true" 
                kernelMatrix="-1 0 0 0 4 0 0 0 -1"
            />
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#sharpenTheImage)"></image>
</svg>

🟢 Hue Rotation

Lets get funky in this mother... Rotate the hue to change the colours of the image. Very simple effect too.

<feColorMatrix type="hueRotate" in="SourceGraphic" values="45"/>
Hue Rotate Filter

Define the degrees of the hue rotation with the `values` attribute. Gives some funky results too.


🟥 Overlays & Blending Modes

OK, so we've gotten this far with just some simple filters. Now we need to add a couple more elements to achieve what we want. Again, I'm going to try to keep this as simple as possible.

So, we want two things this time.

  1. A solid overlay of a particular colour.
  2. A filter to blend the image and the solid overlay.

Let's tackle the solid overlay. This is called a Flood filter effect.

  <filter id="blendingTest">
    <feFlood 
    	flood-color="#FF0000"
        flood-opacity="0.9"
        result="floodOut"
    ></feFlood>
  </filter>
Flood Filter

The flood has a specific colour, #FF0000 in this instance. Notice it also has a result flag too. This is for chaining multiple filter effects together. One filter's result can be another's in or input.

Just to add to the spice, we can also add flood-opacity too, which is a nice little addition to control the amount of the colour.

Secondly, the blending of the feFlood and the sourceGraphic with a blend-mode.

  <filter id="blendingTest">
    <feFlood 
    	flood-color="#FF0000"
        flood-opacity="0.0"
        result="floodOut"
    ></feFlood>
    <feBlend mode="multiply" in="SourceGraphic" in2="floodOut"/>
  </filter>
Blend Mode Added

So now, we have two in's... in and in2. These are now blended together with the 'multiply' blend-mode.

So the full code would be:

<svg viewBox="0 0 1500 1000">
    <defs>
    	  <filter id="blendingTest">
            	<feFlood 
                	flood-color="#FF0000"
                	flood-opacity="0.95"
                	result="floodOut"
            	></feFlood>
            <feBlend mode="multiply" in="SourceGraphic" in2="floodOut"/>
          </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#blendingTest)"></image>
</svg>

⛓ Chaining Filters Together

You got a little taste in the previous 'overlay' example, but the crux is that you can chain multiple effects together to create one epic filter. To do this, you specify a result keyword that is then used on the in of the next one.

So chaining three of the previous filters together should be easy. Let's put together:

  1. Saturate
  2. HueRotate
  3. Sharpen
<filter id="epicTriple">
    <feColorMatrix 
    	in="SourceGraphic"
        type="saturate" 
        values="3"
        result="saturateOUT"
    />
    
    <feColorMatrix 
        in="saturateOUT"
    	type="hueRotate" 
        values="45"
        result="hueOUT"
    />
    
    <feConvolveMatrix 
    	in="hueOUT" 
        order="3" 
        preserveAlpha="true" 
        kernelMatrix="-1 0 0 0 4 0 0 0 -1"
        result="sharpenOUT"
    />
    
</filter>

As you can see, I've labeled each effect in and result and chained them together.

Image >>> saturateOUT >>> hueOUT >>> sharpenOUT

This gives us the code:

<svg viewBox="0 0 1500 1000">
    <defs>
    	  
        <filter id="epicTriple">
            <feColorMatrix 
                in="SourceGraphic"
                type="saturate" 
                values="3"
                result="saturateOUT"
            />

            <feColorMatrix 
                in="saturateOUT"
                type="hueRotate" 
                values="45"
                result="hueOUT"
            />

            <feConvolveMatrix 
                in="hueOUT" 
                order="3" 
                preserveAlpha="true" 
                kernelMatrix="-1 0 0 0 4 0 0 0 -1"
                result="sharpenOUT"
            />

        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#epicTriple)"></image>
</svg>

📓 Black & White

There's actually a bunch of ways to do this. I'll give you four here.

First, just desaturate the image completely to 0. Easy-as. We've done this above.

<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="blackAndWhite">
            <feColorMatrix 
            	type="saturate" 
                in="SourceGraphic" 
                values="0"
            ></feColorMatrix>
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#blackAndWhite)"></image>
</svg>
Simple Black & White Filter

Secondly, we can use the more complex feColorMatrix matrix type to control the RGB values that make up the image. This version uses the red channel to desaturate the image.

<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="blackAndWhiteRed">
        	<feColorMatrix type="matrix" values="
                .33 0 0 0 0
                .33 0 0 0 0
                .33 0 0 0 0
                 0  0 0 1 0">
        	</feColorMatrix>
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#blackAndWhiteRed)"></image>
</svg>
Red-Channel B&W

We have the green channel:

<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="blackAndWhiteGreen">
        	<feColorMatrix type="matrix" values="
                0 .33 0 0 0
                0 .33 0 0 0
                0 .33 0 0 0
                0   0 0 1 0">
        	</feColorMatrix>
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#blackAndWhiteGreen)"></image>
</svg>
Green Channel B&W

And the Blue Channel:

<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="blackAndWhiteBlue">
        	<feColorMatrix type="matrix" values="
                0 0 .33 0 0
                0 0 .33 0 0
                0 0 .33 0 0
                0 0   0 1 0">
        	</feColorMatrix>
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#blackAndWhiteBlue)"></image>
</svg>
Blue Channel B&W

🌁 Instagram Effects

Take a look at Una Kravets' fantastic work on CSSGram so you can see what's possible with CSS and how she's recreated famous Instagram filters.

Thanks to her generosity, we can take a look at the underlying SCSS code she used to make those effects on her github repository:

Let's take a look at one of her effects and re-create it now using SVG filters:

The 'aden' effect is right here: https://github.com/una/CSSgram/blob/master/source/scss/aden.scss


In it, you can see she used the following CSS filter:

@mixin aden($filters...) {
  @include filter-base;
  filter: 
     hue-rotate(-20deg) 
     contrast(.9) 
     saturate(.85) 
     brightness(1.2) 
     $filters;

  &::after {
    background: 
    linear-gradient(to right, rgba(66, 10, 14, .2), transparent);
    mix-blend-mode: darken;
  }

  @content;
}

Alright. A hue-rotate, contrast, saturate, brightness and a colour overlay with a 'darken' blend mode. With all our previous knowledge, I'm sure we can recreate that.

I'm also going to add on a sneaky sharpen too - just for my own effect.

Chaining the filters together is now as easy as pie:

<svg viewBox="0 0 1500 1000">
    <defs>
    <filter id="aden" filterUnits="objectBoundingBox">
    
    <!-- HUE ROTATE -20deg -->
    <feColorMatrix 
    	type="hueRotate" 
        in="SourceGraphic" 
        values="-20"
        result="hueRotateOut" 
    />
     
    <!-- DESATURATE 0.85 -->
    <feColorMatrix 
    	type="saturate" 
        in="hueRotateOut"  
        values="0.85"
        result="saturateOut"
    />
     
    <!-- BRIGHTNESS 0.9 -->
    <feComponentTransfer in="saturateOut" result="brightnessOut">
        <feFuncR type="linear" slope="0.9"/>
        <feFuncG type="linear" slope="0.9"/>
        <feFuncB type="linear" slope="0.9"/>
    </feComponentTransfer>

    <!-- CONTRAST 1.2 -->
    <feComponentTransfer in="brightnessOut" result="contrastOut">
        <feFuncR type="linear" slope="1.2" intercept="0.05"/>
        <feFuncG type="linear" slope="1.2" intercept="0.05"/>
        <feFuncB type="linear" slope="1.2" intercept="0.05"/>
    </feComponentTransfer>

    <!-- FLOOD OVERLAY rgba(66, 10, 14, .2) -->
    <feFlood 
    	flood-color="#420A0E" 
        flood-opacity="0.2" 
        out="floodOut" 
    />

    <!-- BLEND FLOOD & CONTRASTOUT -->
    <feBlend 
    	mode="darken" 
        in="contrastOut" 
        in2="floodOut" 
        result="blendOut"
    />

    <!-- SHARPEN -->
    <feConvolveMatrix 
    	in="blendOut"
        order="3" 
        preserveAlpha="true" 
        kernelMatrix="-1 0 0 0 4 0 0 0 -1"
        result="sharpenOut" 
    />

  </filter>
  </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#aden)"></image>
</svg>

Which gives us this wonderful result:

Original:

Just for reference, all of her effects are here: https://github.com/una/CSSgram/tree/master/source/scss


Final Thoughts

During this little project I've really only brushed the surface of what's possible and I'm blown away by the power of SVG image manipulation.

For the LondonParkour project I wrote the SVG data out into a file and then used Imagick and Inkscape to render out to a PNG and JPG. Now I can start generating images however I like with whatever effects and components I like.

You can send me any questions you like over on my twitter account.

twitter.com/lonetraceur


📖 Quick Reference

<feColorMatrix 
	type="saturate" 
    in="SourceGraphic" 
    values="3"
 />
Saturation Filter
<feComponentTransfer in="sourceGraphic">
    <feFuncR type="linear" slope="0.5"/>
    <feFuncG type="linear" slope="0.5"/>
    <feFuncB type="linear" slope="0.5"/>
</feComponentTransfer>
Brightness filter
  <feComponentTransfer in="sourceImage">
      <feFuncR type="linear" slope="1.2" intercept="-0.1"/>
      <feFuncG type="linear" slope="1.2" intercept="-0.1"/>
      <feFuncB type="linear" slope="1.2" intercept="-0.1"/>
  </feComponentTransfer>
Contrast Filter
<feConvolveMatrix in="SourceGraphic" order="3" preserveAlpha="true" kernelMatrix="-1 0 0 0 4 0 0 0 -1"/>
Sharpening Filter
<feColorMatrix type="hueRotate" in="SourceGraphic" values="45"/>
Hue Rotate Filter
  <filter id="blendingTest">
    <feFlood 
    	flood-color="#FF0000"
        flood-opacity="0.0"
        result="floodOut"
    ></feFlood>
    <feBlend mode="multiply" in="SourceGraphic" in2="floodOut"/>
  </filter>
Flood & Blend Mode Filter
<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="blackAndWhite">
            <feColorMatrix 
            	type="saturate" 
                in="SourceGraphic" 
                values="0"
            ></feColorMatrix>
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#blackAndWhite)"></image>
</svg>
Simple Black & White Filter
<svg viewBox="0 0 1500 1000">
    <defs>
        <filter id="blackAndWhiteGreen">
        	<feColorMatrix type="matrix" values="
                0 .33 0 0 0
                0 .33 0 0 0
                0 .33 0 0 0
                0   0 0 1 0">
        	</feColorMatrix>
        </filter>
    </defs>
    
  <image xlink:href="/content/images/2020/12/bigben_o.jpg" width="1500" height="1000" filter="url(#blackAndWhiteGreen)"></image>
</svg>
Green Channel B&W
]]>
<![CDATA[Got Game? Secrets of Great Incident Management]]>https://ioroot.com/uptime-com/5fe394773b12550001bc744dTue, 18 Feb 2020 07:25:39 GMT

Little shoutout to my friend and colleague John Arundel who wrote a fantastic article for uptime.com and their blog about some of the work my colleagues and I do at ThirtyThree.

Have a read of his blog article here:

https://uptime.com/blog/got-game-secrets-of-great-incident-management

]]>
<![CDATA[Free, Automated Scheduling of Instagram Videos - Part 6 - Posting]]>https://ioroot.com/free-automated-scheduling-of-instagram-videos-part-6-triggering/5fe394773b12550001bc744cSat, 25 Jan 2020 20:44:17 GMT

Scheduled Post + Hootsuite iPhone App + Instagram = New post!

Well, this is it folks... Once the scheduled date comes around, your hootsuite app will be alerted and you'll be able to post onto Instagram.

Use the official Hootsuite App for this. Authorise your account and away you go.

And that, as they say, is a wrap...

It's a fairly long process, but you've now seen the steps for automating much of it and making your life much easier with scheduling posts.

Further steps

Some of the other things I want to figure out and have automated.

  1. Post to YouTube. Should be easy, just another Zapier connection to my YouTube account.
  2. Post to Google My Business. This may be harder. No Zapier connectivity as yet and Hootsuite doesn't natively support it. I have API access to my account, but may need to write some solution myself.
  3. YouTube back to website LondonParkour.com. So I also want to write a scraper of my YouTube account so I can pull all of these posts back to the website as articles for good SEO juice and links. They're all good content, so I want to take advantage of it all. I also want to add parsing of markdown in the youtube description so when it's scraped onto the website it'll be displayed better and with a little bit of formatting.
  4. Mailchimp and newsletters. I want to have links automatically added into a fully automated monthly newsletter. This can then give me most of the content to post out. I could also use trello for the rest of the content.

Game Over

Any problems with the steps, well, have a go at googling the problem and figuring it out... Time and patience will be a much better teacher than me just giving you the answers. However, if you want to hire and pay me to help you, that's a whole different matter. Email me at andy@londonparkour.com for all enquiries!

Hope you liked the jungle-themed imagery from unsplash for this series too!

]]>
<![CDATA[Free, Automated Scheduling of Instagram Videos - Part 5 - Triggering]]>https://ioroot.com/free-automated-scheduling-of-instagram-videos-part-4-labelling/5fe394773b12550001bc744bSat, 25 Jan 2020 20:28:40 GMT

Trello Card + Trello Label + Hootsuite + Zapier = Triggered Zapier Zap

Let's take stock of where we're currently at... We've filmed a bunch of videos, thrown them through some automated processing for watermarking, trimming, colour grading and resizing. The video has been uploaded to Google Drive and a URL generated to point directly at it.

We have then written our video text content and filled in the relevant details within a Trello card with special custom fields.

Once we have filled in all of the card details we will then add the 'SCHEDULED' label to the trello card which will trigger the next steps.

Zapier

That next step is zapier.com.

Zapier.com

The beauty of zapier.com is that it allows you to connect different services together through their APIs. It's usually limited in terms of each service's full API functionality, but it covers the most common things. It's extremely useful for quickly setting up more complex workflows.

The 'Zap' (actions we want to take) is as follows:

Within Trello, once a New Label is Added to a Card, we want to take all the details of that particular card and use them to Schedule a Message within a new service called Hootsuite.

Hootsuite

Before we create the Zap, we'll need to create a new free account on Hootsuite. The service is a central dashboard for posting and scheduling media onto different channels. From YouTube, Pinterest and Facebook to Twitter, Google My Business and the main one we're interested in... Instagram.

The Hootsuite Planner

You'll need to connect your different social media accounts to Hootsuite and authorise them, but it's a pretty painless process.

Steps to set up Zapier - Trello triggers

Back to zapier and start the process of creating a new Zap.

Pick Trello first as the App to use, authorise zapier to use the app, and start by choosing the trigger 'New Label Added to Card'.

Trigger Event

Pick the particular trello account you want to use, you'll probably only have one, so pick that one.

Next, we need to specify the Trello board we're using and the label that will trigger the event. Remember, it's the 'SCHEDULED' one.


Once that's done, continue and try to find some real data. You may need to head back over to trello and make sure you've got at least one card with some dummy data in each field you're going to use.


This card data will be pulled into to Zapier so we can then map each field to Hootsuite correctly.

Steps to set up Zapier - Hootsuite

Right, now we add Hootsuite. Pick that app, authorise and choose the action we want to take, once this trigger fires... which is 'Schedule Message'.


Choose the account you've just created on Hootsuite and then move on to 'Customise Message'. (Sorry, I'm British 🇬🇧 and we write things with the Queen's English... There's no 'Z' in customise. Sorry, not sorry, American readers. 😉)

OK, So here's where we map all the fields together.

  1. The Trello Card custom-field we called 'Video URL' will be mapped to the Media URL in Hootsuite.
  2. The Social Profile is the Instagram one you want to use in Hootsuite.
  3. The Trello card description will now be used at the text of the post.
  4. Finally I have the Trello card scheduled date as the due date on Hootsuite. This means Hootsuite will put it into its calendar and only post it when it's ready.

Refresh and Send data!

Hopefully you'll see a new post in Hootsuite for the correct scheduled date!


Caveats, Gotchas and sidenotes.

  1. The video format you use will depend on which mobile phone you use to post to instagram. Apple will only use .mov files and not .mp4 natively.
  2. There are limits on Zapier.com. At the time of writing, this is 200 zap triggers per month.
  3. There are limits on Hootsuite.com. This is 30 scheduled posts per month.
  4. All the connectivity can take a little bit of massaging. Make sure you test everything with dummy data and potentially a test Instagram account.
  5. If you have a business or professional instagram account, Hootsuite can directly post to the account without any further interaction. However, I've chosen to keep my personal account for the londonparkour.com instagram account @london_parkour. This means I use the Hootsuite mobile app and manually get alerted of when to post my scheduled post. I find this better with interaction and allows me to post to multiple accounts at the same time.
]]>
<![CDATA[Free, Automated Scheduling of Instagram Videos - Part 4 - Labelling]]>https://ioroot.com/labelling/5fe394773b12550001bc744aSat, 25 Jan 2020 19:39:49 GMT

Trello + Custom Fields + Video URL + Description + Date = Trello Card

So this was the tricky part... getting all of the right details and information into a system that could act like the HQ or control-hub of all the posts.

Enter trello.com

My Trello board for social media posting.

This image shows how I've laid out the channels and topics. The main one is the 'SCHEDULED = Instagram' list which has a card for each post.

One of the extra 'power-ups' in Trello is the custom-fields plugin. This allows you to create extra entry fields for specific pieces of information. Perfect for specifying video URL and Playlist colour.

This is one of the cards for a post.

The most important parts of this card are as follows:

  • Card Title
  • Due Date
  • Label
  • Description
  • Video URL
  • Colour

These entries are all used for uploading into YouTube and Instagram.

The label I've set up is called 'SCHEDULED'. This is the trigger to set everything off and put all of the cogs into motion. Therefore, I write everything I need into the card and the last thing I do is add that label, once I'm happy.

The Video URL is the one we obtained from Google Drive and the URL plugin we used. It's the direct link to the uploaded video.

The colour field is used for my website and some extra custom filtering. It's not needed for your implementation.

The title is ignored in Instagram but used in YouTube.

The description will be the full text of the Instagram post, so it needs to include relevant hashtags, links, URLs and everything else you want to post.

Due date is the date/time the post will be scheduled to be uploaded to social media.

]]>
<![CDATA[Free, Automated Scheduling of Instagram Videos - Part 3 - Uploading]]>https://ioroot.com/free-automated-scheduling-of-instagram-videos/5fe394773b12550001bc7449Sat, 28 Dec 2019 07:23:00 GMTVideo.mp4 + Google Drive + Direct URL plugin

You have your video from the FFMpeg process in the last article. Now we need to host it online. I have a ton of space on google drive, so it makes sense to use that for me.

I've looked into lots of different cloud hosting and there are actually a load of free ones you might want to look at... For instance, there are free tiers on Dropbox, Google Drive, AWS, Azure, etc...

Once uploaded into a specific folder you can't get the direct URL for the file through the google drive interface, so the easiest way I found to get it is by using the Download Link Generator Google Chrome plugin.

This allows you to right-click on your video file and retrieve the direct address of the video. We'll need this URL for the next step in trello...
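
For reference, the direct links you get back generally follow Google Drive's uc?export=download pattern. If you ever need to build one by hand from a normal share link, something like this rough sketch works (my own helper, not part of the plugin or the original workflow, and very large files may still hit Google's confirmation page):

#!/bin/bash
# Rough sketch: turn a Google Drive share link into a direct-download URL.
# Assumes the usual /file/d/<id>/view share-link format.
SHARE_LINK="https://drive.google.com/file/d/FILE_ID_HERE/view?usp=sharing"

# Pull the file ID out of the share link.
FILE_ID=$(echo "$SHARE_LINK" | sed -E 's|.*/d/([^/]+).*|\1|')

echo "https://drive.google.com/uc?export=download&id=${FILE_ID}"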

]]>
<![CDATA[Free, Automated Scheduling of Instagram Videos - Part 2 - Converting]]>https://ioroot.com/automated-instagram-videos-converting/5fe394773b12550001bc7448Sat, 28 Dec 2019 07:16:05 GMTVideo.m4v + BASH + FFMpeg

Not just converting, but automated editing, watermarking, encoding and colour-grading.

The ideal situation would be a fully automated video-editing step that did all the hard work for me. I looked into using aerender (After Effects CLI) to run templates, but it seemed way too complicated and too OTT for the simple types of things I want to do... Enter FFMpeg.

FFMpeg is a super-powered command-line tool for video processing. Many video software packages have it running under the hood in the backend. After an afternoon of playing, it wasn't too tricky to understand, so let's dive in...

You'll have to look up how to install FFMpeg yourself. Google is your friend.

Things I want to do:

  1. Convert to .MP4
  2. Add a watermark for the first 3 seconds of the video.
  3. Cut any crap footage off the beginning
  4. Trim it to a maximum of 60 seconds (for Instagram)
  5. Resize it to smaller dimensions (to save file-sizes for instagram uploads)
  6. If possible, add a LUT to colour-grade the footage (like an instagram filter)

OK, let's go through this bit by bit.

Step 1, conversion to .mp4

ffmpeg -i input_movie.m4v output_movie.mp4

Blam! Easy as pie. Actually, I don't know how to make pie... Easy as falling off a log. FFMpeg detects what encoding you want by the file extensions. The -i means 'input'.

OK, Let's tackle a harder one. The watermark.

ffmpeg
    -i input_file.m4v 
    -framerate 59 
    -loop 1 
    -i watermark_image.png 
    -filter_complex "[1:v] fade=out:st=3:d=1:alpha=1 [ov]; [0:v][ov] overlay=0:0 [v]" 
    -map "[v]" 
    -map 0:a 
    -c:v libx264 
    -c:a copy  
    output_movie.mp4

Whoa, whoa, whoa! WTF! Breathe... and release.

It's not hard at all. Before we break it down, keep in mind that the order of the command-line flags (before or after input files) matters with FFMpeg.

The inputs:

ffmpeg
    -i input_file.m4v 
    -framerate 59 
    -loop 1 
    -i watermark_image.png

Take one input as normal -i input_file.m4v (no flags before it).

Take a second input -i watermark_image.png, which is an image, and loop that as a single frame at 59fps... Why do this? Because as a single image (i.e. a single frame), it needs to be made into a 3-second video to overlay the original video.

-filter_complex "
    [1:v] fade=out:st=3:d=1:alpha=1 [ov]; 
    [0:v][ov] overlay=0:0 [v]
"

filter_complex is a flag to run multiple filters one after another. Each filter is separated by a semicolon (so I've put them on separate lines for ease of reading).

[1:v] fade=out:st=3:d=1:alpha=1 [ov];

This is filter one. FFMpeg gives each input an ID (starting with 0). Each input has an audio :a and video :v channel too. So 1:v would represent the second input's video channel (That's the watermark, remember). Easy, right?

Then the magic of the filter happens. fade=out:st=3:d=1:alpha=1 says: use the fade filter and fade out, starting at time position 3 (three seconds in), and run that filter for a duration of 1 second. Also, apply any alpha channels (transparencies). Check the FFMpeg 'filters' section of the docs for more details on the options available for the 'fade' filter.

Lastly, label the output of this filter (to be used in the next filter) with [ov]. You can call it what you like here... [mywatermarkedvideo] is equally OK.

The second filter is easier now we know the formatting of FFMpeg, right? [0:v][ov] overlay=0:0 [v] says: take the video of the first input [0:v] plus the [ov] stream (that we just created with the last filter) and use the overlay filter with the parameters 0:0, which positions the watermark at coordinates 0,0 (the top-left corner) on top of the video.

Don't forget the output label... [v] which is now our watermarked video!

-map "[v]" 
-map 0:a 
-c:v libx264 
-c:a copy

The -map flag is used to tell the FFMpeg encoder (the bit that converts the files) which audio channel and video channel to use. We want our new [v] watermarked video, and the original video audio [0:a].

The -c flags are used to tell the encoder how to convert. For the video, use the -c:v libx264 library. For the audio, just copy it across, no need to convert it. -c:a copy

Alrighty, we've now got a watermarked video! Whoop!

Next : Cutting any crap footage off the beginning.

-ss 4

Easy! This flag means 'seek start'. In other words, seek to the position in the input where you want to start reading.

ffmpeg -ss 4 -i input_movie.mov output.mp4

Don't forget, order matters. Because the -ss flag comes before the first input, it'll start reading the input 4 seconds in, which allows us to trim off any amount we want from the beginning.

Trim it to a maximum of 60 seconds

We're on a roll... This is now as simple as -t 60, which is a duration of 60 seconds. It's worth noting here that there's another flag, -to, which goes to the 60-second mark of the input video rather than giving 60 seconds of duration from the start point (remember we cut 4 seconds off the beginning).
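
Here's the trim on its own (a minimal sketch, separate from the watermark pipeline above):

ffmpeg -ss 4 -i input_movie.m4v -t 60 output.mp4

This starts reading 4 seconds in and keeps at most 60 seconds of output.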

Resize it to smaller dimensions

-s 1080x608 Come on... it literally couldn't be easier, right? Size is now 1080 wide by 608 high.

Add a LUT to colour-grade the footage

Alright, this is the last part before we put it all together... It requires a little re-jig of the -filter_complex flag, because we want to add this extra filter in.

So, the filter now becomes:

-filter_complex "
    [0:v] lut3d=mylut.3dl [outlut]; 
    [1:v] fade=out:st=3:d=1:alpha=1 [ov]; 
    [outlut][ov] overlay=0:0 [v]
"

The first filter is now a lut3d one that takes the filename/location of the LUT you want to use. We apply it to the first input video [0:v] and output the label [outlut].

Lastly, instead of using the original video for the overlay stage of the watermarking, we'll use the colour-graded footage. So the overlay filter needs to change slightly to have that as the input instead.

[outlut][ov] overlay=0:0 [v]

That's it. Use the LUT-applied footage and overlay the watermark on top of it. That now becomes [v] which we can encode.

The entire power-ranger assemble command becomes this:

ffmpeg
    -ss 4
    -i input_movie.m4v
    -framerate 59 
    -loop 1 
    -i watermark.png
    -s 1080x608 
    -filter_complex "
    	[0:v] lut3d=lutfile.3dl [outlut]; 
        [1:v] fade=out:st=3:d=1:alpha=1 [ov]; 
        [outlut][ov] overlay=0:0 [v]"
    -map "[v]" 
    -map 0:a 
    -c:v libx264 
    -c:a copy 
    -t 60
    -shortest output_movie.mp4

If you want to put this into a bash script, you'll need to add some tweaks for the newlines, and I actually add in a -nostdin flag for looping purposes and a -hide_banner to suppress output bits.

There's one last flag we should use: -shortest which tells FFMpeg to keep going until the end of the shortest video, then stop. So if the input video is 45sec long, it'll stop there. So 60sec is the maximum, but not necessarily the output length.

So the full bash script would be:

#!/bin/bash

ffmpeg -hide_banner \
    -ss 4 \
    -i input_movie.m4v \
    -framerate 59 \
    -loop 1 \
    -i watermark.png \
    -s 1080x608 \
    -filter_complex "[0:v] lut3d=lutfile.3dl [outlut]; [1:v] fade=out:st=3:d=1:alpha=1 [ov]; [outlut][ov] overlay=0:0 [v]" \
    -map "[v]" \
    -map 0:a \
    -c:v libx264 \
    -c:a copy \
    -t 60 \
    -nostdin \
    -shortest output_movie.mp4

Go get a cup of coffee... Then we'll add in some BASH bits to take variables on the command line.

Add BASH Variables to run on the command line.

I'm going to start by substituting the hard-coded values in the FFMpeg command with the bits I want to be able to change:

  • input file : $INPUTFILE
  • watermark image $WATERMARK
  • lut file $LUTFILE
  • output file $OUTPUTFILE
#!/bin/bash

ffmpeg -hide_banner \
    -ss 4 \
    -i "$INPUTFILE" \
    -framerate 59 \
    -loop 1 \
    -i "$WATERMARK" \
    -s 1080x608 \
    -filter_complex "[0:v] lut3d=$LUTFILE [outlut]; [1:v] fade=out:st=3:d=1:alpha=1 [ov]; [outlut][ov] overlay=0:0 [v]" \
    -map "[v]" \
    -map 0:a \
    -c:v libx264 \
    -c:a copy \
    -t 60 \
    -nostdin \
    -shortest "$OUTPUTFILE"

If this is to be a command-line tool, we need to check that these variables are supplied, otherwise FFMpeg will fail. So we can add this before the command:

# Take Arguments.
if [ "$#" -ne 4 ]; then
  echo "$0 $1 $2 $3 $4"  >&2
  echo "Usage: $0 FILE WATERMARK LUTFILE OUTPUTFILE " >&2
  exit 1
fi

Which reads: "if the number of arguments does not equal 4, echo this message and exit."

Otherwise, it does equal 4, so continue.

Now, let's assign names to the variables. Because... good practice.

INPUTFILE=$1
WATERMARK=$2
LUTFILE=$3
OUTPUTFILE=$4

We could do many more checks and bits, but let's leave it there... Create a new file touch convert_video.sh and paste this in:

#!/bin/bash

# Take Arguments.
if [ "$#" -ne 4 ]; then
  echo "$0 $1 $2 $3 $4"  >&2
  echo "Usage: $0 FILE WATERMARK LUTFILE OUTPUTFILE " >&2
  exit 1
fi

INPUTFILE=$1
WATERMARK=$2
LUTFILE=$3
OUTPUTFILE=$4

ffmpeg -hide_banner \
    -ss 4 \
    -i "$INPUTFILE" \
    -framerate 59 \
    -loop 1 \
    -i "$WATERMARK" \
    -s 1080x608 \
    -filter_complex "[0:v] lut3d=$LUTFILE [outlut]; [1:v] fade=out:st=3:d=1:alpha=1 [ov]; [outlut][ov] overlay=0:0 [v]" \
    -map "[v]" \
    -map 0:a \
    -c:v libx264 \
    -c:a copy \
    -t 60 \
    -nostdin \
    -shortest "$OUTPUTFILE"
    
exit 0

Make sure the file is executable: chmod +x convert_video.sh

Then run it with the variables on the command line:

./convert_video.sh in_video.mov watermark.png mylut.3dl out_video.mp4

Voila!

This will convert your video, add the watermark, apply the LUT file, trim it, cut it and resize it.

There's so much more we could do (and I have) to add to this process... in the github repository you'll see that I've added the ability to do the following things:

  • Loop over multiple files, one after another (see the sketch just after this list).
  • Control what happens to each file (cut point, length, LUT file, etc... ) with a CSV file.
  • Generate that CSV file.
  • Control all defaults on the whole process with a config.conf file. (Default LUT, directories to look in, remove existing files, etc...)
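
As a rough idea of what that looping looks like (a simplified sketch, not the exact script from the repo, and the CSV column layout here is just an assumption):

#!/bin/bash
# Sketch only: loop over a CSV of jobs and run convert_video.sh for each one.
# Assumed columns: input_file,watermark,lut_file,output_file
CSV_FILE="jobs.csv"

while IFS=',' read -r INPUTFILE WATERMARK LUTFILE OUTPUTFILE; do
    # Skip blank lines and the header row.
    [ -z "$INPUTFILE" ] && continue
    [ "$INPUTFILE" = "input_file" ] && continue

    ./convert_video.sh "$INPUTFILE" "$WATERMARK" "$LUTFILE" "$OUTPUTFILE"
done < "$CSV_FILE"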

Bug fix.

There was one bug that proved a pain in the ass... trimming the video to the correct length. FFMpeg cuts relative to keyframes, and if the cut point doesn't land on the right one you may find the last second of the video freezes. I've fixed that by encoding a few seconds beyond the duration needed during the main encoding pass, then, once the video is converted, doing the final cut with a second FFMpeg command.
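
To make that concrete, here's roughly the shape of the fix (a sketch of the idea, not the exact commands from the repo):

# Pass 1: the main encode, kept a few seconds longer than the target length
# (the watermark/LUT filters are left out here to keep the trim logic clear).
ffmpeg -hide_banner -ss 4 -i input_movie.m4v -t 65 -c:v libx264 -c:a copy temp_long.mp4

# Pass 2: trim the freshly-encoded file down to the final 60 seconds.
# Re-encoding the trim means the cut isn't locked to keyframe positions,
# so the end of the clip shouldn't freeze.
ffmpeg -hide_banner -i temp_long.mp4 -t 60 -c:v libx264 -c:a copy output_movie.mp4

rm temp_long.mp4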

Onto Part 3 - Coming soon!

]]>
<![CDATA[Free, Automated Scheduling of Instagram Videos - Introduction]]>https://ioroot.com/automated-instagram-posts/5fe394773b12550001bc7447Sat, 28 Dec 2019 07:13:12 GMT

I used to create quick parkour tutorial videos for Instagram to post every now and again, and thought that I'd actually do it a little more seriously for my company LondonParkour. It's a good bit of content that I can post on a regular basis with minimal effort.

This post is about the process that I finally came up with, after a lot of playing around and figuring things out... It became a little more complicated than I originally predicted.

The lesser-known six circles of Dante's logistical hell are broken up as follows.

  1. Filming - Getting simple lavalier audio on a GoPro.
  2. Converting - BASHing away at automating simple video editing.
  3. Uploading - Google Driving me to jump hoops for a URL.
  4. Labelling - Not using post-it notes for storing the info so far.
  5. Triggering - Toppling that first domino.
  6. Posting - RED Alert! Change the bulb to a red one!

Each stage posed a challenge and required research and a little critical thinking. Let's dive in and get started with...

Part 1 - Filming

]]>
<![CDATA[Free, Automated Scheduling of Instagram Videos - Part 1 - Filming]]>https://ioroot.com/automated-instagram-videos/5fe394773b12550001bc7446Tue, 24 Dec 2019 21:30:46 GMTGoPro 6 + iPhone 6S + AirPods + MykApp
GoPro!

I have a GoPro 6 that has the ridiculously idiotic design flaw of covering all the microphones up when it's in the case for screwing onto a tripod. Yep, that's right, all of the microphones get covered over by plastic from the case and are therefore unusable for vlogs.

The solution? An external microphone. Easy right? Nope. Not even close.

You may have noticed that the GoPro has no jack-plug input and no bluetooth audio connectivity... That's because they want you to buy this god-awful thing:

GoPro Audio Adapter

Fuck that.

What a ridiculous piece of technology for an audio-in socket. Then you need to add the microphone itself, and if you want a lavalier microphone like I do (one with a transceiver to wirelessly connect to a lapel mic), you'd need all of that plugged in as well. I have an existing mini transceiver that works very well and thought I could connect it over Bluetooth to a lapel mic.

Then I started thinking about wiring my own jack-plug into a pre-existing mic on the GoPro. That was a no-go from the start because of the waterproofing, AD Converters, etc...

Finally, after a few days I started wondering if I could use my Apple AirPods somehow. They have mics in them, they're Bluetooth, and I just needed a way to get them to connect to the GoPro. I was doing lots of reading online and came across a random post that mentioned an iPhone app called Myk App. So I checked it out:

Myk App

This was the answer... These are the droids I was looking for.

So here's what you can do.

  1. Connect your GoPro to your iPhone for audio and video recording
  2. Connect your AirPods through bluetooth to your iPhone for a second audio recording.
  3. Start recording audio / video on the GoPro and simultaneously record the audio on the iPhone through the airpods.
  4. Once finished, you can download the GoPro footage to the iPhone and then combine it with the audio recorded by the airpods!
  5. The mixing ability allows you to shift the audio of either recording forwards or backwards to synchronise to the video correctly. (The airpods recording tends to be 0.5secs behind, so I have to shift it slightly)
  6. Process the video with the new audio from the airpods and save it to your camera folder.
  7. Download all footage to the computer for next steps!

Oh, one downside though... it's not free. The app is free, but to use it properly you need to pay a $3.99 monthly subscription. I HATE these subscription models. But it was cheap enough for me to try it on a monthly basis, loved it, then I bought the yearly ($19.99) subscription.

Fantastic. So I now have good footage from my GoPro, with good audio, courtesy of 'AI Motion, Inc'... Check out their app and have a play on Android or iOS.

Part 2 - Converting

]]>
<![CDATA[Testing this new blog]]>https://ioroot.com/testing-this-new-post/5fe394773b12550001bc7444Mon, 23 Dec 2019 21:11:31 GMT

Using ghost.org to create it. Github Pages to host it.

Deployment help from https://zzamboni.github.io/test-ghost-blog/hosting-a-ghost-blog-in-github-the-easier-way/index.html

Let's try some code highlighting. This is the first version of a simple script to create a static version from the localhost.

#!/bin/bash

# Start the local Ghost instance.
ghost start

# Mirror the running site into ./static:
# -q quiet, -r recursive, -nH no host directory, -P static output prefix,
# -E add .html extensions, -T 2 two-second timeout, -np don't ascend to the parent,
# -k convert links for local browsing.
wget -q -r -nH -P static -E -T 2 -np -k https://ioroot.com/

echo "Done. See ./static directory to push to github."
]]>