Upskilled.dev (https://upskilled.dev/, Ghost 4.6)

Customizing your brand and design settings
https://upskilled.dev/design/ (Wed, 26 May 2021 13:56:03 GMT)

As discussed in the introduction post, one of the best things about Ghost is just how much you can customize to turn your site into something unique. Everything about your layout and design can be changed, so you're not stuck with yet another clone of a social network profile.

How far you want to go with customization is completely up to you; there's no right or wrong approach! The majority of people use one of Ghost's built-in themes to get started, and then progress to something more bespoke later on as their site grows.

The best way to get started is with Ghost's branding settings, where you can set up colors, images and logos to fit with your brand.

[Screenshot: Ghost Admin → Settings → Branding]

Any Ghost theme that's up to date and compatible with Ghost 4.0 and higher will reflect your branding settings in the preview window, so you can see what your site will look like as you experiment with different options.

When selecting an accent color, try to choose something which will contrast well with white text. Many themes will use your accent color as the background for buttons, headers and navigational elements. Vibrant colors with a darker hue tend to work best, as a general rule.
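As a rough way to check whether a color will hold up behind white text, you can compute its contrast ratio using the WCAG 2 relative-luminance formula. This is a general accessibility calculation, not something built into Ghost:

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an sRGB color given as a (0-255, 0-255, 0-255) tuple."""
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_vs_white(rgb):
    """Contrast ratio of a color behind white text (white's luminance is 1.0)."""
    return (1.0 + 0.05) / (relative_luminance(rgb) + 0.05)

# WCAG suggests a ratio of at least 4.5:1 for normal-size text.
print(round(contrast_vs_white((21, 101, 192)), 2))  # a darker blue: good contrast
print(round(contrast_vs_white((255, 235, 59)), 2))  # a bright yellow: poor contrast
```

Anything below roughly 4.5:1 is a sign your accent color may be too light for white button labels.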

Installing Ghost themes

By default, new sites are created with Ghost's friendly publication theme, called Casper. Everything in Casper is optimized to work for the most common types of blog, newsletter and publication that people create with Ghost — so it's a perfect place to start.

However, there are hundreds of different themes available to install, so you can pick out a look and feel that suits you best.

[Screenshot: Ghost Admin → Settings → Theme]

Inside Ghost's theme settings you'll find 4 more official themes that can be directly installed and activated. Each theme is suited to slightly different use-cases.

  • Casper (default) — Made for all sorts of blogs and newsletters
  • Edition — A beautiful minimal template for newsletter authors
  • Alto — A slick news/magazine style design for creators
  • London — A light photography theme with a bold grid
  • Ease — A library theme for organizing large content archives

And if none of those feel quite right, head on over to the Ghost Marketplace, where you'll find a huge variety of both free and premium themes.

Building something custom

Finally, if you want something completely bespoke for your site, you can always build a custom theme from scratch and upload it to your site.

Ghost's theming template files are very easy to work with, and can be picked up in the space of a few hours by anyone who has just a little bit of knowledge of HTML and CSS. Templates from other platforms can also be ported to Ghost with relatively little effort.

If you want to take a quick look at the theme syntax to see what it's like, you can browse through the files of the default Casper theme. We've added tons of inline code comments to make it easy to learn, and the structure is very readable.

{{#post}}
<article class="article {{post_class}}">

    <h1>{{title}}</h1>

    {{#if feature_image}}
        <img src="{{feature_image}}" alt="Feature image" />
    {{/if}}

    {{content}}

</article>
{{/post}}
A snippet from a post template

See? Not that scary! But still completely optional.

If you're interested in creating your own Ghost theme, check out our extensive theme documentation for a full guide to all the different template variables and helpers which are available.

Writing and managing content in Ghost, an advanced guide
https://upskilled.dev/write/ (Wed, 26 May 2021 13:56:02 GMT)

Ghost comes with a best-in-class editor which does its very best to get out of the way, and let you focus on your content. Don't let its minimal looks fool you, though, beneath the surface lies a powerful editing toolset designed to accommodate the extensive needs of modern creators.

For many, the base canvas of the Ghost editor will feel familiar. You can start writing as you would expect, highlight content to access the toolbar you would expect, and generally use all of the keyboard shortcuts you would expect.

Our main focus in building the Ghost editor is to make as many of the things you hope or expect might work actually work:

  • You can copy and paste raw content from web pages, and Ghost will do its best to correctly preserve the formatting.
  • Pasting an image from your clipboard will upload inline.
  • Pasting a social media URL will automatically create an embed.
  • Highlight a word in the editor and paste a URL from your clipboard on top: Ghost will turn it into a link.
  • You can also paste (or write!) Markdown and Ghost will usually be able to auto-convert it into fully editable, formatted content.
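As a toy illustration of that last point, here's what auto-converting a couple of Markdown patterns into editable HTML might look like. This is a hypothetical two-rule sketch, not Ghost's actual parser:

```python
import re

def mini_markdown(text: str) -> str:
    """Convert **bold** and [label](url) syntax to HTML; real editors handle far more."""
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\[(.+?)\]\((.+?)\)", r'<a href="\2">\1</a>', text)
    return text

print(mini_markdown("Read **this** on [Ghost](https://ghost.org)"))
# Read <strong>this</strong> on <a href="https://ghost.org">Ghost</a>
```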

[Screenshot: The Ghost editor. Also available in dark-mode, for late night writing sessions.]

The goal, as much as possible, is for things to work so that you don't have to think so much about the editor. You won't find any disastrous "block builders" here, where you have to open 6 submenus and choose from 18 different but identical alignment options. That's not what Ghost is about.

What you will find though, is dynamic cards which allow you to embed rich media into your posts and create beautifully laid out stories.

Using cards

You can insert dynamic cards inside post content using the + button, which appears on new lines, or by typing / on a new line to trigger the card menu. Many of the choices are simple and intuitive, like bookmark cards, which allow you to create rich links with embedded structured data:

[Bookmark card: Open Subscription Platforms, "A shared movement for independent subscription data."]

or embed cards which make it easy to insert content you want to share with your audience, from external services:

But, dig a little deeper, and you'll also find more advanced cards, like one that only shows up in email newsletters (great for personalized introductions) and a comprehensive set of specialized cards for different types of images and galleries.

Once you start mixing text and image cards creatively, the whole narrative of the story changes. Suddenly, you're working in a new format.

As it turns out, sometimes pictures and a thousand words go together really well. Telling people a great story often has much more impact if they can feel, even for a moment, as though they were right there with you.


Galleries and image cards can be combined in so many different ways — the only limit is your imagination.

Build workflows with snippets

One of the most powerful features of the Ghost editor is the ability to create and re-use content snippets. If you've ever used an email client with a concept of saved replies then this will be immediately intuitive.

To create a snippet, select a piece of content in the editor that you'd like to re-use in future, then click on the snippet icon in the toolbar. Give your snippet a name, and you're all done. Now your snippet will be available from within the card menu, or you can search for it directly using the / command.

This works really well for saving images you might want to use often, like a company logo or team photo, links to resources you find yourself often linking to, or introductions and passages that you want to remember.


You can even build entire post templates or outlines to create a quick, re-usable workflow for publishing over time. Or build custom design elements for your post with an HTML card, and use a snippet to insert it.

Once you get a few useful snippets set up, it's difficult to go back to the old way of diving through media libraries and trawling for that one thing you know you used somewhere that one time.


Publishing and newsletters the easy way

When you're ready to publish, Ghost makes it as simple as possible to deliver your new post to all your existing members. Just hit the Preview link and you'll get a chance to see what your content looks like on Web, Mobile, Email and Social.


You can send yourself a test newsletter to make sure everything looks good in your email client, and then hit the Publish button to decide who to deliver it to.

Ghost comes with a streamlined, optimized email newsletter template that has settings built-in for you to customize the colors and typography. We've spent countless hours refining the template to make sure it works great across all email clients, and performs well for email deliverability.

So, you don't need to fight the awful process of building a custom email template from scratch. It's all done already!


The Ghost editor is powerful enough to do whatever you want it to do. With a little exploration, you'll be up and running in no time.

Building your audience with subscriber signups
https://upskilled.dev/portal/ (Wed, 26 May 2021 13:56:01 GMT)

What sets Ghost apart from other products is that you can publish content and grow your audience using the same platform. Rather than just endlessly posting and hoping someone is listening, you can track real signups against your work and have them subscribe to be notified of future posts. The feature that makes all this possible is called Portal.

Portal is an embedded interface for your audience to sign up to your site. It works on every Ghost site, with every theme, and for any type of publisher.

You can customize the design, content and settings of Portal to suit your site, whether you just want people to sign up to your newsletter — or you're running a full premium publication with user sign-ins and private content.


Once people sign up to your site, they'll receive an email confirmation with a link to click. The link doubles as an automatic sign-in, so subscribers are signed in to your site as soon as they click it. There are a couple of interesting angles to this:

Because subscribers are automatically able to sign in and out of your site as registered members, you can (optionally) restrict access to posts and pages depending on whether people are signed in or not. So if you want to publish some posts for free, but keep some really great stuff for members only, this can be a great draw to encourage people to sign up!

Ghost members sign in using email authentication links, so there are no passwords for people to set or forget. You can turn any list of email subscribers into a database of registered members who can sign in to your site. Like magic.
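The general idea behind these email sign-in links is a signed, expiring token embedded in the URL. Here's a minimal sketch of that pattern using only the standard library; the secret, domain and parameter names are illustrative, and this is not Ghost's actual implementation:

```python
import hashlib, hmac, time

SECRET = b"server-side-secret"  # hypothetical; kept private on the server

def make_signin_link(email: str, ttl: int = 3600) -> str:
    """Build a passwordless sign-in link: email + expiry + HMAC signature."""
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{email}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"https://example.com/signin?email={email}&expires={expires}&sig={sig}"

def verify(email: str, expires: str, sig: str) -> bool:
    """Accept the link only if it hasn't expired and the signature matches."""
    if int(expires) < time.time():
        return False  # link has expired
    expected = hmac.new(SECRET, f"{email}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because only the server knows the secret, nobody can forge a valid link for someone else's email address.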

Portal makes all of this possible, and it appears by default as a floating button in the bottom-right corner of your site. When people are logged out, clicking it will open a sign-up/sign-in window. When members are logged in, clicking the Portal button will open the account menu where they can edit their name, email, and subscription settings.

The floating Portal button is completely optional. If you prefer, you can add manual links to your content, navigation, or theme to trigger it instead.

Like this! Sign up here


As you start to grow your registered audience, you'll be able to get a sense of who you're publishing for and where those people are coming from. Best of all: You'll have a straightforward, reliable way to connect with people who enjoy your work.

Social networks go in and out of fashion all the time. Email addresses are timeless.

Growing your audience is valuable no matter what type of site you run, but if your content is your business, then you might also be interested in setting up premium subscriptions.

Selling premium memberships with recurring revenue
https://upskilled.dev/sell/ (Wed, 26 May 2021 13:56:00 GMT)

For creators and aspiring entrepreneurs looking to generate a sustainable recurring revenue stream from their creative work, Ghost has built-in payments allowing you to create a subscription commerce business.

Connect your Stripe account to Ghost, and you'll be able to quickly and easily create monthly and yearly premium plans for members to subscribe to, as well as complimentary plans for friends and family.

Ghost takes 0% payment fees, so everything you make is yours to keep!

Using subscriptions, you can build an independent media business like Stratechery, The Information, or The Browser.

The creator economy is just getting started, and Ghost allows you to build something based on technology that you own and control.

[Screenshot: The Browser has over 10,000 paying subscribers]

Most successful subscription businesses publish a mix of free and paid posts to attract a new audience, and upsell the most loyal members to a premium offering. You can also mix different access levels within the same post, showing a free preview to logged out members and then, right when you're ready for a cliffhanger, that's a good time to...

How to grow your business around an audience
https://upskilled.dev/grow/ (Wed, 26 May 2021 13:55:59 GMT)

As you grow, you'll probably want to start inviting team members and collaborators to your site. Ghost has a number of different user roles for your team:

Contributors
This is the base user level in Ghost. Contributors can create and edit their own draft posts, but they are unable to edit drafts of others or publish posts. Contributors are untrusted users with the most basic access to your publication.

Authors
Authors are the 2nd user level in Ghost. Authors can write, edit and publish their own posts. Authors are trusted users. If you don't trust users to be allowed to publish their own posts, they should be set as Contributors.

Editors
Editors are the 3rd user level in Ghost. Editors can do everything that an Author can do, but they can also edit and publish the posts of others - as well as their own. Editors can also invite new Contributors & Authors to the site.

Administrators
The top user level in Ghost is Administrator. Again, administrators can do everything that Authors and Editors can do, but they can also edit all site settings and data, not just content. Additionally, administrators have full access to invite, manage or remove any other user of the site.

The Owner
There is only ever one owner of a Ghost site. The owner is a special user which has all the same permissions as an Administrator, but with two exceptions: The Owner can never be deleted. And in some circumstances the owner will have access to additional special settings if applicable. For example: billing details, if using Ghost(Pro).

Ask all of your users to fill out their user profiles, including bio and social links. These will populate rich structured data for posts and generally create more opportunities for themes to fully populate their design.

If you're looking for insights, tips and reference materials to expand your content business, here are five top resources to get you started:

Setting up apps and custom integrations
https://upskilled.dev/integrations/ (Wed, 26 May 2021 13:55:58 GMT)

It's possible to extend your Ghost site and connect it with hundreds of the most popular apps and tools using integrations.

Whether you need to automatically publish new posts on social media, connect your favorite analytics tool, sync your community or embed forms into your content — our integrations library has got it all covered with hundreds of integration tutorials.

Many integrations are as simple as inserting an embed by pasting a link, or copying a snippet of code directly from an app and pasting it into Ghost. Our integration tutorials are used by creators of all kinds to get apps and integrations up and running in no time — no technical knowledge required.


Zapier

Zapier is a no-code tool that allows you to build powerful automations, and our official integration allows you to connect your Ghost site to more than 1,000 external services.

Example: When someone new subscribes to a newsletter on a Ghost site (Trigger) then the contact information is automatically pushed into MailChimp (Action).
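The heart of a Zap like that is a small transform between the trigger's payload and the action's payload. A sketch of what that mapping might look like; the field names here are illustrative, not the exact Ghost or Mailchimp schemas:

```python
def member_to_mailchimp(ghost_webhook: dict) -> dict:
    """Map a (hypothetical) Ghost member-added payload to a Mailchimp-style subscriber."""
    member = ghost_webhook["member"]["current"]
    return {
        "email_address": member["email"],
        "status": "subscribed",
        "merge_fields": {"FNAME": member.get("name") or ""},
    }

payload = {"member": {"current": {"email": "jamie@example.com", "name": "Jamie"}}}
print(member_to_mailchimp(payload))
```

Zapier handles the wiring; all you configure is which fields map to which.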

Here are a few of the most popular automation templates:

Custom integrations

For more advanced automation, it's possible to create custom Ghost integrations with dedicated API keys from the Integrations page within Ghost Admin.


These custom integrations allow you to use the Ghost API without needing to write code, and create powerful workflows such as sending content from your favorite desktop editor into Ghost as a new draft.
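If you do want to write code against those keys, the Admin API authenticates with a short-lived JWT built from the integration's key (an `id:secret` pair). A standard-library sketch; the `/v4/admin/` audience matches Ghost 4, so check the API docs for your version before relying on it:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def ghost_admin_token(api_key: str) -> str:
    """Build an HS256 JWT from an Admin API key of the form '<id>:<hex secret>'."""
    key_id, secret = api_key.split(":")
    now = int(time.time())
    header = {"alg": "HS256", "typ": "JWT", "kid": key_id}
    claims = {"iat": now, "exp": now + 300, "aud": "/v4/admin/"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    signature = hmac.new(bytes.fromhex(secret), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

# A made-up key for illustration; a real one comes from the Integrations page.
print(ghost_admin_token("62b1c1f8a1b2c3d4e5f60718:6465616462656566"))
```

The resulting token goes in an `Authorization: Ghost <token>` header on Admin API requests.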

AWS IAM (Identity and Access Management) Introduction
https://upskilled.dev/aws-iam-identity-and-access-management-introduction/ (Mon, 01 Jun 2020 17:38:52 GMT)

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

Umm.. okay?

So what this means essentially is that with IAM you can authorize specific users to access an AWS service or resource. Imagine you're using two AWS services: an EC2 server (running your web application) and S3 (storing files). You might want to give a marketing employee access to the files, and hence to S3. But of course, the marketing employee has no reason to access the web server. With IAM you can create a user account and grant it access only to S3.

And it gets even more granular. S3 stores files in something called buckets. You can even decide to give the user access to one bucket and deny access to another.

But wait, there’s more!

IAM also provides Identity Federation, which allows users to log in using external accounts like Facebook, LinkedIn, etc.

It also gives you the option to use Multi-factor Authentication, i.e. an additional verification step like one-time passwords (OTPs).

Heck, you can even give a user access to a resource temporarily! Cool, eh?

What? You want more security? Sure thing. Add a password rotation policy so the users have to change their passwords after every few days/months.

Also, the security provided is PCI DSS compliant. According to a Google search, PCI DSS stands for "Payment Card Industry Data Security Standard." I guess that speaks for itself.

Now you’re ofcourse not going to manually assign the same policies to each user. There are groups for that. Create a group and attach as many users to it as you want. And when an AWS service wants access to another AWS service, you guessed it, that’s possible too! Just create a role, attach it and give access to that resource. Easy as 1,2,3!

One last keyword to remember: policies. Access to resources is granted by creating what are called policies. A policy is simply a key-value document (JSON) that defines what level of access is given to which resource. E.g. if you have a bucket named profile-pictures, you can write a policy granting access only to that bucket in S3, so all other buckets remain private.
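Concretely, the profile-pictures example might look like the policy document below. This is a sketch of a read-only identity policy (the JSON structure follows AWS's policy grammar; the bucket name is from the example above):

```python
import json

# A hypothetical identity policy: read-only access to one bucket, nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::profile-pictures",  # the bucket itself
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::profile-pictures/*",  # objects inside it
        },
    ],
}
print(json.dumps(policy, indent=2))
```

You'd attach a document like this to the marketing employee's user (or, better, to a group they belong to).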

Alright, enough is enough. Let’s do the Lab!

S3 Introduction
https://upskilled.dev/s3-introduction/ (Mon, 01 Jun 2020 17:33:27 GMT)

“Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.”

So you build your incredible new social network that's going to take over the world. It has some amazing features that no other social network provides. However, there is one critical feature that you need to add: a provision to upload profile pictures. But here's the problem: the storage on your EC2 instance is limited. It can't store millions of users' photos. Also, if you terminate the server, that data is *poof* gone! You also can't horizontally scale your application, since the new servers won't have the data stored on the primary EC2 instance.

The solution to all of your problems is S3. S3 is a service that allows you to store objects (files) that can be fetched later on. You can also store your backups on S3 and use them later. And don’t worry about losing your data either,

“Amazon S3 is designed for 99.999999999% (11 9’s) of durability”

Files are stored under buckets, and bucket names are globally unique. You can't use the same bucket name even in two different AWS accounts.
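Bucket names also follow strict formatting rules: 3 to 63 characters; lowercase letters, digits, dots and hyphens; starting and ending with a letter or digit. A quick validator covering just those core rules (AWS imposes a few more, e.g. no IP-address-style names):

```python
import re

# First char + 1-61 middle chars + last char = 3 to 63 characters total.
BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return BUCKET_NAME.match(name) is not None

print(is_valid_bucket_name("profile-pictures"))   # True
print(is_valid_bucket_name("Profile_Pictures"))   # False: no uppercase or underscores
```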

There are a few storage classes provided. You can decide which one is appropriate for your use case, and hence save costs on storing objects that are less frequently accessed. The classes are as follows:

  1. S3 Standard
  2. S3 Intelligent Tiering
  3. S3 Standard-Infrequent Access (S3 Standard-IA)
  4. S3 One Zone-Infrequent Access (S3 One Zone-IA)
  5. S3 Glacier
  6. S3 Glacier Deep Archive

Let’s take a look at the different classes

S3 Standard

This class provides low latency and high throughput, making it the obvious choice for frequently accessed data. Think about data that is currently used in your application, e.g. profile pictures of users.

S3 Intelligent Tiering

Now, if you're uncertain how to classify your data into the appropriate classes, AWS can do it for you. S3 Intelligent-Tiering monitors each object's access patterns and moves it to the appropriate tier to optimize costs: if an object is accessed less frequently, it's automatically moved to an infrequent-access tier; otherwise it stays in the standard tier.

S3 Standard-Infrequent Access (Standard-IA)

This class provides storage for data that is accessed less frequently but requires rapid access when requested. It provides high durability, high throughput and low latency. You'll be charged a retrieval fee to access this data, so you can't trick AWS. 😏

S3 One-Zone Infrequent Access

Imagine you have data that isn't frequently accessed and doesn't need to be replicated across multiple zones. The other classes store your files in a minimum of 3 Availability Zones, while One Zone-IA, as you may have guessed by now, stores them in a single zone. Hence, you get about 20% off of your purchase! Order Now! Heh..😶

S3 Glacier

S3 Glacier is useful for data archiving. You know, storing files that you MAY require some time later. The point of Glacier is to give you cheap storage for files that can be retrieved in minutes to hours (3 retrieval options). It isn't suitable for data that must load instantly when requested. You can also configure lifecycle rules to automatically move your objects between active and archive storage.

S3 Glacier Deep Archive

This is the lowest you can go, in terms of cost. It's for data that is accessed very infrequently, like once every year or two, maybe five. You could imagine this is data that was collected years ago and sits deep within your application. Nobody really goes back there, but since it's possibly important or even critical data, you keep it stored on S3. Data stored in this class takes a few hours to retrieve.

Let’s Talk About The Money 💰

Charges are applicable for the following:

  1. Storage Used (per GB)
  2. Number of Requests (per 1000)
  3. Data Transfer (per GB)
  4. Management & Replication

There are separate charges for different levels of use. As your usage increases, your cost per GB reduces. To view and analyze your use case, head over to the pricing section of AWS.
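To get a feel for why the classes exist, here's the storage arithmetic with some illustrative per-GB prices. These are made-up ballpark figures, not current AWS rates; always check the pricing page:

```python
# Illustrative ballpark prices in USD per GB-month; NOT current AWS rates.
PRICE_PER_GB = {
    "Standard": 0.023,
    "Standard-IA": 0.0125,
    "One Zone-IA": 0.01,
    "Glacier": 0.004,
    "Glacier Deep Archive": 0.00099,
}

def monthly_storage_cost(gb: float, storage_class: str) -> float:
    """Storage charge only; requests, retrieval and transfer are billed separately."""
    return round(gb * PRICE_PER_GB[storage_class], 2)

for cls in PRICE_PER_GB:
    print(f"500 GB in {cls}: ${monthly_storage_cost(500, cls)}")
```

The gap between Standard and Deep Archive is why picking the right class for cold data matters.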

The free tier includes 5GB of Amazon S3 storage in the S3 Standard storage class; 20,000 GET Requests; 2,000 PUT, COPY, POST, or LIST Requests; and 15GB of Data Transfer Out each month for one year.

Elasticache Introduction
https://upskilled.dev/elasticache-introduction/ (Mon, 01 Jun 2020 13:13:32 GMT)

“Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-Source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores.”

Umm.. okay?

Before we actually understand ElastiCache, let's look at what in-memory databases are.

Traditionally, a database stores data on a storage device like the hard drive or SSD of the computer. The difference between storage (HDD, SSD) and memory (RAM) is that RAM is volatile but very fast to access, while storage is non-volatile but slow to access. This means fetching data from a disk-backed database is slower than fetching data stored in memory. An in-memory database stores data in memory (RAM) for faster reads and writes.

You would of course not use in-memory databases as primary databases. Firstly, memory is expensive: 16GB of RAM costs about as much as a 500GB or maybe even a 1TB hard drive. Secondly, it's volatile: once you switch off the system, all data is lost. In-memory DBs are used for frequently accessed data such as leaderboards or sessions, which, if lost, can be recreated.

Amazon ElastiCache is a service provided by AWS that lets you run in-memory databases in the cloud. There are two main players in this space, Memcached and Redis, and both are supported by ElastiCache.
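To make the session/leaderboard idea concrete, here's a toy in-memory store with Redis-style expiring keys. This is an illustration of the concept only; in practice you'd talk to a real Redis or Memcached node provisioned through ElastiCache:

```python
import time

class TTLCache:
    """A toy in-memory key-value store with Redis-style expiring keys."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        """Store a value, optionally expiring after ttl seconds."""
        expires_at = time.monotonic() + ttl if ttl is not None else None
        self._store[key] = (value, expires_at)

    def get(self, key):
        """Return the value, or None if missing or expired."""
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._store[key]  # expire lazily on read, as Redis can
            return None
        return value

cache = TTLCache()
cache.set("session:42", {"user": "jo"}, ttl=3600)
print(cache.get("session:42"))  # {'user': 'jo'}
```

Everything lives in RAM, which is exactly why it's fast and exactly why it disappears on restart.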

That’s all for today folks. See you in a new episode of…AWS! Take care, stay safe! 👋

AWS RDS Snapshots And Read Replicas Explained
https://upskilled.dev/aws-rds-snapshots-and-read-replicas-explained/ (Mon, 01 Jun 2020 13:11:54 GMT)

AWS RDS is a database service that is used to manage relational databases. When managing a database, there is an obvious requirement to backup the data generated in a timely fashion. RDS provides the feature to backup your databases automatically and manually.

“By default, Amazon RDS creates and saves automated backups of your DB instance securely in Amazon S3 for a user-specified retention period. In addition, you can create snapshots, which are user-initiated backups of your instance that are kept until you explicitly delete them.”

There are two main types of backups: Automated Backups and Database (Manual) Snapshots. These are snapshots of your entire DB instance, not just the tables. Automated Backups are taken daily during your backup window and kept for the retention period you configure. E.g. with a retention period of 7 days, backups older than 7 days are deleted automatically.

You can use backups to perform point-in-time restores. This means you can restore your database to any moment within the retention period, down to the very second, using its transaction logs.
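The mechanics are easy to picture: start from a base state and replay the transaction log up to the chosen moment. A toy sketch of the idea (RDS of course does this at the storage-engine level, not per key):

```python
from datetime import datetime

# A pretend transaction log: (timestamp, key, new_value)
log = [
    (datetime(2020, 6, 1, 12, 0, 0), "balance", 100),
    (datetime(2020, 6, 1, 12, 0, 30), "balance", 80),
    (datetime(2020, 6, 1, 12, 1, 15), "balance", 150),
]

def restore_to(log, target_time):
    """Replay every logged write up to (and including) target_time."""
    state = {}
    for ts, key, value in log:
        if ts > target_time:
            break  # everything after the target moment is ignored
        state[key] = value
    return state

print(restore_to(log, datetime(2020, 6, 1, 12, 1, 0)))  # {'balance': 80}
```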

Copying snapshots is also possible. You can create a snapshot and copy that to have two individual copies of that snapshot. And these can be shared within regions, across them and across AWS accounts as well!

The backups are stored on S3, and if you're on the free tier, you get 20GB of backup storage free! 🎊

In the following parts of this article, I'm assuming you have already set up your RDS database. If you haven't, follow along with our RDS Create A Database article to get up and running with an RDS instance.

Creating A Snapshot

To create a snapshot, you can go to Services > RDS > Databases, select your database and click on Actions. Here you can click on Take Snapshot.

All you need to do is give your snapshot a name, click on Take Snapshot.

Voila! Head into the Snapshots section of RDS and you'll see your snapshot there. Well, it'll take some time to complete (depending on how large your data is), but it'll be available.

Creating A Read Replica

Head over to databases under RDS, select your database and click on Actions > Create read replica. Here you’ll see more or less all the options you would see when creating a new database, if you need more details on the options, check out our article on RDS Create A Database.

You can choose to have a Multi-AZ deployment as well for disaster recovery. Not just that, you can choose to deploy the read replica in a different region. This could be useful if, say, your website is live in two countries and you want to reduce latency by having a read replica in one country and the source DB in another.

Restoring To A Point In Time

Head over to databases under RDS, select your database and click on Actions > Restore to a point in time.

Notice that it says Launch DB Instance. This is because it will launch a new DB instance with a new DNS name that you need to connect to. So if you have an application that you need to connect to the database, then you’ll need to update the connection URL.

You can restore the database (create a new instance) to a point in time to the exact second that you select. 🤯

Multi AZ Deployments

Multi-AZ means Multi-Availability-Zone deployment. Multi-AZ deployments are used for extreme failover situations, like if the AWS Availability Zone literally catches fire, or there is a natural disaster. 🌪

If you select your DB instance and click Modify, you'll be able to set Multi-AZ deployment to yes. However, note that Multi-AZ deployments are chargeable even if you're using the free tier.

That’s all folks! Tune in again or check out other articles on the blog to be the AWS Pro 👨‍💻

See ya later! 👋

RDS Connect EC2 To RDS MySQL
https://upskilled.dev/rds-connect-ec2-to-rds-mysql/ (Mon, 01 Jun 2020 13:09:11 GMT)

Hey there!

In this guide we will set up a MySQL database on AWS Relational Database Service (RDS) and connect it to a new EC2 instance. If you already have an EC2 instance set up or both EC2 and RDS set up, but are unable to link the two, you can still follow along and you’ll surely get some useful knowledge along the way!

Without further ado, let’s get cracking! 🚀

Head over to Services > EC2 > Instances. Click on Launch Instance to create a new EC2 instance.

I’m utilizing the free tier to its fullest, so I’m going to check Free Tier on the sidebar and select Amazon Linux AMI. You may choose whichever you like.

NOTE: Later in the article we’ll set up Node.js to test our DB connection. If you choose a different flavor of Linux, that step will likely be a little different than what I do in this guide, but it should be an easy Google search away.

Again, out of the several options provided, I choose to go for the free tier. You may choose what’s appropriate for your purpose; for the sake of learning/practicing, the free tier server is more than enough. I’m not configuring this one further; if you’d like to know more about EC2 configurations, check out our EC2 Setup guide.

Once everything is configured, go ahead and click Launch.

If you have a key pair you can use it, or create a new one. Make sure you download the key pair and do not lose it. Once you’ve chosen, click Launch Instances.

Next up, we need to create a database instance with RDS. You can do so by going to Services > RDS (Under Database) > Databases. Now Click on Create database.

My weapon of choice is MySQL; however, you can pick whichever you like. Note that Aurora and Oracle do not have free tiers, so be careful if you don’t want to be charged. You can choose a specific version, but I’m going to stick to the default.

Here, make sure you choose Free Tier if that’s what you’re going for. Add a DB instance identifier that AWS will use to identify your RDS instance, and add your credentials as well. These are the database credentials you’ll use to connect to your database.

Under Additional Options, you may enter a database name that AWS will use to create a default database for you. You can disable backups if you like; however, backups up to 20 GB are free on the free tier (at least as of writing this article), so it doesn’t really matter either way. I’ll leave the rest at the defaults.

Make sure the database is in the same VPC as the EC2 server and click Create database. While it spins up, let’s take a peek at our EC2 server. It should be ready by now. One thing to pay attention to: it doesn’t have an IAM role, which is needed to give EC2 permission to access the AWS RDS service. Let’s create and attach one.

Select your server, click on Actions > Instance Settings > Attach/Replace IAM Role.

If you have already created an IAM role for EC2 that provides RDS access, you may choose that. For the purposes of this article, I’ll create a new one and attach the same. Click Create new IAM role.

A new tab will open up with the IAM role creation screen. Let’s click on Create role.

Select AWS service and EC2, and then click Next: Permissions.

Look for RDS. There are several policies you can choose from. I’m not going into the details of the different options; I’ll simply choose AmazonRDSFullAccess and click Next: Tags.

Give the role a name! 👶

Once created, go back to the previous tab and click the small refresh icon next to the dropdown. This should refresh the list of IAM roles. Select the new role that you just made.

Now go back to EC2, select your instance, and click Connect. Copy the SSH string and paste it in your terminal. Make sure you’re in the same folder as your ssh key (.pem file).

You’ll be prompted to trust this host (the new EC2 server), type yes.

Enter the following:

aws rds describe-db-instances --region your-region

You can find your region code here.

If you can see at least one DB instance, your IAM role is working fine. If not, check that the database instance is up; and if the command returns an error, check that your IAM role is configured correctly.
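Once describe-db-instances works, you can narrow its output with --query to pull out just the endpoint address, which you’ll need later to connect. This is a sketch; your region and output will differ.

```shell
# Print only the endpoint address of each DB instance.
aws rds describe-db-instances \
  --region your-region \
  --query 'DBInstances[].Endpoint.Address' \
  --output text
```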

Next, let’s install Node.js. (You may skip this step but you need to update your security group as explained below).

Steps to install Node.js can be found in the AWS Documentation as linked here.

Create a new folder and switch to it.

mkdir testApp

cd testApp

npm init

Create an index.js file and open it in your favorite editor. I’m using nano here.

Paste the sample code from the mysql npm package page:

var mysql      = require('mysql');

// Replace host, user, password and database with your RDS
// endpoint and the credentials you set when creating the instance.
var connection = mysql.createConnection({
  host     : 'localhost',
  user     : 'me',
  password : 'secret',
  database : 'my_db'
});

connection.connect();

// Run a trivial query to verify the connection works.
connection.query('SELECT 1 + 1 AS solution', function (error, results, fields) {
  if (error) throw error;
  console.log('The solution is: ', results[0].solution);
});

connection.end();

Before we run our code we need to install the mysql driver from npm. (Don’t tell anyone I had forgotten to do so 🤫)

npm install mysql

Go to your database instance and find its endpoint (DNS name). Copy it; we need to update the index.js file with the correct host and credentials.
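By the way, the endpoint itself tells you a few things: RDS endpoints follow the shape identifier.random-id.region.rds.amazonaws.com. A quick shell sketch with a made-up endpoint:

```shell
# Hypothetical endpoint -- substitute your own from the console.
endpoint="mydb.abcd1234efgh.us-east-1.rds.amazonaws.com"

# The third dot-separated field is the AWS region.
region=$(echo "$endpoint" | cut -d. -f3)
echo "$region"   # prints us-east-1
```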

Once that is done, save and exit the editor.

Pro Tip: The server and database used in this article have since been terminated, so you cannot “hack” into them.

If you try to run the code now, it will return an error. This is because even though our EC2 server and database instance are in the same VPC, they have different security groups, and the database’s security group doesn’t yet allow inbound connections from the server’s. Think of a security group as a firewall.

Open your database instance in the AWS console and find the VPC Security Groups section under security (at the right).

Open the Security Group attached and then click on Edit Inbound Rules.

Switch back to your EC2 tab to check the name of its Security Group. In my case it’s launch-wizard-4.

Add a new rule and select MySQL/Aurora from the Type dropdown. In Source, find the security group you saw in the EC2 server config: type sg- and you will see all your security groups listed. Select the correct one and click Save rules.
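The same rule can be added from the CLI. Both security group IDs below are hypothetical placeholders — substitute the database’s group for --group-id and the EC2 server’s for --source-group.

```shell
# Allow MySQL (port 3306) into the database's security group
# from the EC2 server's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db111111111111aa \
  --protocol tcp \
  --port 3306 \
  --source-group sg-0ec222222222222bb
```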

That’s it! Run your Node app (node index.js) to see it connect to the database ⚡️

Thanks for going through this article. Lemme know your thoughts/questions below. See ya later Alligator! 👋

]]>
<![CDATA[RDS Create A Database]]>

Welcome to this article. In this article we’ll go ahead and create a new database with Amazon Relational Database Service (RDS). This database is going to be a MySQL database, but you can follow along to create really any of the options available in RDS.

Let’s

]]>
https://upskilled.dev/rds-create-a-database/60ae5a76f2ceaacd35f194aaMon, 01 Jun 2020 13:03:51 GMT

Welcome to this article. In this article we’ll go ahead and create a new database with Amazon Relational Database Service (RDS). This database is going to be a MySQL database, but you can follow along to create any of the engines available in RDS.

Let’s go! 🚀

To begin, head over to your AWS Console and click on Services > RDS (Under Database).

Since I don’t have any databases created, it shows a Create Database button at the top, but you can also see a create section below. In any case, click on Create Database.

Here you’ll see several database engines listed; by default, Amazon Aurora is selected. You can choose whichever you prefer, but I’m going to go with MySQL.

You can then choose the version of your database. If you would like to use a specific version, you can select it from the dropdown. AWS also conveniently gives us a link to known issues/limitations, if you’d like to know more about the version.

You can also choose the template, whether production, test or free tier. I am currently on the free tier, so I am going to choose the same.

NOTE: Free tier is not available for Aurora and Oracle

You then need to scroll down and enter credentials that will be used to access the database.

The DB instance identifier is simply an ID used by AWS, and it is case-insensitive. Enter a username and password that will be used to connect to the database instance.

Next you can select the server instance class. If you’ve selected Free Tier, you’ll have only this one option available. Below that you have storage details: you can choose to auto-scale the storage and set a maximum size.

Choose whether you want disaster recovery for your database via Multi-AZ deployment. Multi-AZ stands for Multiple Availability Zones: each region that has AWS servers usually has multiple availability zones, in case one building suffers a catastrophic failure. If your application requires high availability, you might want to go with Multi-AZ, but do note that it will increase your costs.

You may also select your VPC (Virtual Private Cloud), so that services in that cloud can access the database. 

By default AWS keeps the database private, so it cannot be accessed from outside the VPC. It won’t even have a public IP address, so you cannot connect to it directly. You may change this in advanced settings, but make sure you attach a security group (or create one) that only allows incoming connections from trusted computers (including AWS services). You can also make it available to a specific subnet.

Choose a specific AZ or let AWS choose it, and set the port number to be used to access the database.

Scrolling further, you can open Additional Configuration to set a default database name (AWS will create that database for you) and to configure automatic backups.

If you do choose to keep backups, you’ll get an option for how long backups are retained. Storing backups is chargeable; however, on the free tier you get 20 GB of storage for backups and 20 GB for the database. Select a window for when you want backups to happen, or keep No preference.

You can choose to enable enhanced monitoring and enable logs to get more details about the instance and track down issues if something goes wrong.

Finally, you can select maintenance options such as minor version upgrades and the maintenance window. You can also enable Deletion Protection so that the delete option is disabled; you’d need to come back here and turn it off again before you can delete this database.

AWS also lists estimated costs below, which is very convenient. In my case since I’m using free tier, it just lists the benefits I have in free tier.

Once all options are configured, just scroll down and click Create Database. It takes some time to get up and running, so you can check back in a few minutes.

Congrats, you’ve got your database up and running! But there’s more work to do to complete your startup. Next up, let’s connect our EC2 instance to RDS!

Check out the next article to set it up! I’ll meet you there, bye! 👋

]]>
<![CDATA[RDS Introduction]]>

“Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications

]]>
https://upskilled.dev/rds-introduction/60ae5a76f2ceaacd35f194a9Mon, 01 Jun 2020 13:00:57 GMT

“Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.”

So AWS RDS, or Relational Database Service, is a managed service for running your databases. You can run the following database engines on it:

  1. Amazon Aurora
  2. PostgreSQL
  3. MySQL
  4. MariaDB
  5. Oracle
  6. Microsoft SQL Server

But why another service? Can’t we just run it on EC2?

Well, RDS lets you easily set up, patch and back up your database. All you need to do then is connect to your database, set up the schema and start executing your queries.
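To make that concrete, here’s roughly what “connect and set up the schema” looks like with the stock mysql client. The endpoint, user and schema names are hypothetical placeholders.

```shell
# Connect to a hypothetical RDS MySQL instance (prompts for the password)...
mysql -h mydb.abcd1234efgh.us-east-1.rds.amazonaws.com -u admin -p

# ...then, inside the mysql prompt, create a schema and start querying:
#   CREATE DATABASE my_app;
#   USE my_app;
#   CREATE TABLE users (id INT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(100));
```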

Next up, let’s create a database. See you there! 👋

]]>
<![CDATA[AWS CLI Configure on EC2]]>

The AWS CLI is a very useful tool to access other AWS services from your EC2 servers or even your laptop/computer. In this article we’re going to take a look at AWS CLI and set it up.

Before you start using the CLI, you need to create

]]>
https://upskilled.dev/aws-cli-configure-on-ec2/60ae5a76f2ceaacd35f194a8Mon, 01 Jun 2020 12:58:48 GMT

The AWS CLI is a very useful tool to access other AWS services from your EC2 servers or even your laptop/computer. In this article we’re going to take a look at AWS CLI and set it up.

Before you start using the CLI, you need to create a user through IAM and get the Access Key ID and Secret Access Key to authenticate and authorize yourself to access those services. This user should have Programmatic Access. We have an article on creating users through IAM and giving them permissions. Be sure to go through that article if you need help creating the user.

NOTE: You can instead use roles (recommended) to give access to your EC2 instances to access other AWS services. More on that in the IAM Roles article.

Alright then, let’s get started! 🚀

Login to your AWS console and head over to Services > EC2 > Instances. Click on Connect to see the SSH connection string.

Copy the connection string and paste it in your terminal. Make sure you’re in the directory where your SSH key (.pem file) resides. Once logged into your EC2 instance, enter

aws --version

This will show you the version of the AWS CLI, which also tells you whether it’s installed at all.

Before we start using the CLI, we have to configure it. So enter 

aws configure

As I had mentioned at the start of this article, we need the Access Key ID and Secret Access Key. If you’ve followed our IAM Create Users guide, you’ll have downloaded a csv file. This file contains the details separated by commas. The second and third items in the file are the Access Key ID and Secret Access Key respectively.

NOTE: I used Raven here but you should use Sam, as Raven only has Marketing access. Basically for this tutorial use one that has S3 access to create and read.

You can set the default region and output format if you like, I’m simply going to skip that.
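If you later change your mind, you don’t have to rerun the whole interactive prompt; aws configure set updates a single value. The region here is just an example.

```shell
# Set (or change) the default region non-interactively.
aws configure set region us-east-1

# Verify what's stored.
aws configure get region
```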

If you now do 

aws s3 ls

It should show you your s3 buckets. In my case there is nothing here so it doesn’t show anything.

We can create a bucket and then list it to test things out.

aws s3 mb s3://bucket_name

aws s3 ls

Congratulations, Sam is now able to access the AWS services through AWS CLI! 🎉

Although, as I have mentioned before, if your purpose is to let EC2 instances access AWS services, use roles instead. To set up the AWS CLI on your laptop or computer, go to the AWS CLI download page and download the version for your OS.

There’s so much more that you can do with the CLI, it’s incredibly powerful. To get a list of all the different magical commands, head over to the documentation of AWS CLI.

That’s it! Thanks for taking the time to read this article. Lemme know if you have any questions down below. I’ll see you around! 👋

]]>
<![CDATA[Route 53 Introduction]]>

“Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP

]]>
https://upskilled.dev/route-53-introduction/60ae5a76f2ceaacd35f194a7Mon, 01 Jun 2020 12:53:52 GMT

“Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.”

I think this definition is pretty clear. Route 53 is a DNS service. What is a DNS service? It is a service that translates domain names into IP addresses; computers are better at working with IP addresses than with names, but we mortals prefer names, as they are easier to remember.
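You can watch this translation happen from any terminal. This sketch assumes you have network access and the dig tool installed (nslookup is a common alternative).

```shell
# Ask DNS for the IP address(es) behind a name.
dig +short www.example.com

# nslookup does the same with chattier output.
nslookup www.example.com
```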

When would I use Route 53?

Well, imagine you wanted to host your shiny new website. You’ve started a small business, or maybe even a startup 🦄, and you wish to launch your website on AWS. You followed the EC2 tutorial on our blog to create your new server. However, it leaves you with a disgustingly long URL which doesn’t have anything to do with your brand name. What do you do? You head over to Route 53 and purchase a new domain name! (Well, you could buy one anywhere, but for the purposes of this guide let’s assume you buy it on AWS.)

You can also bind your domain name to an S3 bucket to give your users/developers a fancy-schmancy URL to access files.

That’s it. That’s the article. I’m not going to go much deeper into this service for now.

See ya! 👋

]]>