How to run NFS in Microsoft Azure – The missing piece of the stack


Microsoft Azure’s service offerings grow on a daily basis, so it’s no wonder that it can get quite time-consuming to look into the lower-level details. One customer of mine had architected their on-premises applications around NFS, but hadn’t realised that NFS is not available as a service in Azure.

Services in Azure (credit: https://azureplatform.azurewebsites.net/en-us/)

This is a common challenge: NFS as a service is currently not available in the Azure portal. Microsoft has historically recommended SMB3 (CIFS) for networked file-based storage, and this is mirrored in the current service offerings from Azure.



But what if your architecture currently relies on NFS, as most UNIX and Linux architectures do? Do you re-architect the solution and change your operational processes? Sometimes this isn’t an option, as application vendors specify supported configurations.

Using ONTAP Cloud for Azure is an intelligent way to provide enterprise-class NFS services that consume Azure native resources. ONTAP Cloud runs as a VM and consumes Azure premium or standard storage. This brings a number of benefits:

  1. ONTAP will provision, manage and report on your Azure storage and can be controlled via REST API, Azure Workflow, PowerShell and CLI.
  2. Existing enterprise applications that use NFS, iSCSI or SMB3 can be migrated to Azure with no change of protocol. This also allows easy migration to and from Azure.
  3. By removing the dependency on locally attached cloud storage you can implement a multi-cloud data strategy.
  4. You can operate a full hybrid cloud strategy, allowing you to move or failover to and from Azure with ease using NFS as the common protocol across locations.
  5. Storage efficiency – at time of writing there are no storage efficiencies available across native cloud storage services. Using ONTAP you are able to reduce your storage footprint and save money via deduplication and compression technologies, in turn reducing your cloud consumption spend.

ONTAP Cloud is available in the Azure Marketplace and can be deployed in minutes. It also comes with a free license for the first 30 days.
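
Once deployed, consuming the NFS service from an Azure Linux VM works the same as with any other NFS server. Here’s a minimal sketch – the server IP and export path are illustrative placeholders, not real values:

```shell
# Install the NFS client tools, then mount the ONTAP export.
sudo yum install -y nfs-utils          # Debian/Ubuntu: apt-get install nfs-common
sudo mkdir -p /mnt/data
sudo mount -t nfs 10.0.0.10:/myvolume /mnt/data
```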


Backup for Office 365: It’s already included…. Isn’t it?

Over the last year, I’ve seen a significant uptake in businesses moving from on-premises Exchange environments to Office 365, and it makes absolute sense. When it comes to messaging, there is hardly any difference (in terms of business value/competitiveness) whether you run it yourself or consume it as a service.

But one area in particular does come into play: backup & restore.

Firstly let’s start with the definition of a backup:

An independent copy of my data that can be restored if the source system or service is unavailable.

It’s pretty hard to argue with that definition, but I understand that many will have their own derivatives of it.



Now let’s look at a typical on-premises enterprise estate: most have Exchange, and plenty more have tape or disk-based backup appliances keeping data anywhere from 1–7 years (and a few outliers that refuse to delete anything – LTO2, anyone?).

So why did we spend all that time and money on backup in the first place?

Well actually – it wasn’t about backup, it was about the restore capability.

As an Exchange admin in a past life, I had to be able to tell the directors that I could restore the whole system, or individual emails, on demand, for however long the business required.

With this in mind, let’s take a look at the native O365 capabilities – what do you get for your £17.60/mo per user? (E3 is the minimum subscription offering hold capabilities).

Firstly, we have deleted items. This is handy: users inadvertently delete something and can restore those items with a simple click-and-drag operation. You can even configure this to have unlimited retention (14 days by default). Fantastic!

But what if a user wants to make sure something is no longer in the system? They can simply delete things from Deleted Items, so please don’t confuse this with data protection – it’s simply an end-user benefit. It also relies on O365 being online: if the service is offline, you don’t have access to your emails or any deleted items.

Let’s carry on with our scenario: our fictional user has deleted their items from both their inbox and the Deleted Items folder. What happens next?

Within O365 another recovery folder exists suitably named “Recoverable Items”.

This folder can hold items for up to 30 days (14 by default). Any item that exceeds this duration is lost to the depths of the cloud. The one thing to note is that users can purge their own “Recoverable Items” folders.

So – surely Microsoft has thought about this? Well, yes and no. Microsoft’s answer to this scenario is Litigation Hold. This copies all of the user’s emails to an immutable area (hidden away from users in “Recoverable Items”). There was also the option of doing this “in-place”; however, this is heading the way of the dodo and I wouldn’t suggest deploying it today:

We’ve postponed the July 1, 2017 deadline for creating new In-Place Holds in Exchange Online (in Office 365 and Exchange Online standalone plans). But later this year or early next year, you won’t be able to create new In-Place Holds in Exchange Online.

This is a shame, as Litigation Hold doesn’t support public folders – so if you need those backing up, you’ll need a third-party solution.

Many companies require a separation of roles as a security standard. In this scenario, the O365 administrator could (rightly or wrongly) assign themselves the “eDiscovery Manager” rights and have full access to search and export from Exchange mailboxes, SharePoint folders and OneDrive locations. The admin could even modify the litigation hold policies.

This is one of the key reasons why many businesses opt to use a third-party backup integration with O365. Such solutions regularly include role-based access control and auditing, which help companies comply with current and incoming data protection laws, while also allowing a different department or administrator to hold the rights for restores.

In addition, many clients insist on a recoverable offline copy of their O365 data – even in another cloud provider (AWS S3 anyone?). This is truly the only way to protect from data corruption (Microsoft explicitly state that point-in-time restore of data is not in the scope of O365).

So in summary, if you are looking for an independent offline backup, public folders or additional separation of security, you’ll need a third-party backup tool. If not, then use what you have in your (E3/E5) subscription.

Now, it’s no secret that my day job is as a Cloud Solutions Architect – check out my employer’s Backup-as-a-Service offering for O365, free of charge for 30 days.

It allows granular restore across Exchange Online, SharePoint Online and OneDrive for Business (with more in future) with no agents, no installs and no infrastructure for you to manage – 100% Software-as-a-Service (SaaS). Most importantly – you don’t have to have any NetApp storage to use this offering.

If you have less than 500 users you can purchase directly from AWS Marketplace.


Cost analysis: Storage costs in Azure

Following on from my previous post about storage costs in AWS, I had a few people say “Hey Kirk, liked the post, could you show this for Azure please?”. So here it is.

Let’s take the same business scenario I worked on last week (5 x MySQL servers) and use Azure services instead of AWS. Here’s what it looks like from an architectural view:

The business I was working with uses AWS, however, I wasn’t that surprised to find that the same challenges exist in Azure:

  1. Cost – Storage represents 94% of the MySQL estate cost
  2. Performance – Azure provides disks with varying performance characteristics; the larger the disk, the higher the performance. Therefore, with smaller workload footprints, you would purchase more capacity than required to meet your performance needs.
  3. Cloning – Azure stores snapshots in blob storage. However, in Azure, there is a limitation of a maximum snapshot size of 10TB per storage account. This would rule out the use of snapshots in Azure for this particular customer (as their DB is just over 10TB).

So let’s do the appropriate cost modelling:

From the modelling, we can see that the Azure storage cost represents 94% of the MySQL estate cost – pretty much identical to AWS as a percentage, but overall more expensive (AWS: £5,503/mo vs Azure: £6,190/mo).

So, it’s no secret that I work for a company that develops a cloud storage platform that overlays native premium/standard storage in Azure and adds storage efficiencies such as deduplication and compression, plus the added advantage of instant cloning, with no additional space consumed.

So here’s how that same scenario looks with the ONTAP Cloud storage running:

The first thing you’ll notice is that you no longer have to manage the premium storage disks – ONTAP takes care of this for you. That means no worrying about RAID types, capacity or performance on a host-to-host basis. Win.

The second benefit is reduced costs. ONTAP brings deduplication and compression to the party. Your mileage will vary depending on your data’s suitability for storage efficiencies but it’s not uncommon to see >30% space savings. Win.

If we apply that logic to this scenario, the resulting storage expenditure will be reduced from £5,830/mo to £4,180/mo. That’s a saving of £1,650/mo representing a 28% reduction of the overall expenditure.

But wait – this all assumes that the customer is not using the instant cloning functionality that ONTAP has. This scenario has two servers (dev/test) that use copies of the main prod database (not uncommon).

So what does this look like if the business was to use both storage efficiencies and cloning for two dev/test databases?

The storage costs have reduced even further, from the original £5,830/mo to £2,310/mo, representing a saving of just over 60%. In addition, the customer now benefits from instant, on-demand cloning of their databases.

So this is looking pretty impressive. But of course, ONTAP has a licensing cost – what would our final savings look like once we take that into account? I’m basing this on the Pay & Go option (the most expensive) over 12 months:

ONTAP Cloud Pay & Go (DS13V2) / 12 months = $14,731.20 ~ £11,567.30/year

Therefore the customer would be in line for a total saving of £30,672.70/year, saving almost 44% of their overall Azure cost, pretty impressive!
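
The arithmetic behind that headline number can be sanity-checked in a few lines of shell, using the figures from the modelling above:

```shell
# Monthly storage drops from £5,830 to £2,310 with efficiencies + cloning;
# annualise the saving, then subtract the yearly ONTAP Cloud licence.
awk 'BEGIN {
  monthly_saving = 5830 - 2310              # £3,520/mo
  annual_saving  = monthly_saving * 12      # £42,240/yr
  licence        = 11567.30                 # ONTAP Cloud Pay & Go, DS13V2
  printf "Net annual saving: £%.2f\n", annual_saving - licence
}'
# → Net annual saving: £30672.70
```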

This solves the cost problem for the customer and provides the capability to perform instant cloning of their databases.

Not only that, ONTAP Cloud has APIs, PowerShell, Kubernetes and CLI management, meaning that it can be seamlessly integrated into their developer toolsets and processes.

So, the customer is now able to rapidly develop in Azure with on-demand clones and further reduce their storage footprint with storage efficiencies.

So, it’s a no-brainer – try it today – free of charge for 30 days here.


Cost analysis: Storage costs in AWS

Earlier this week I was working with a cloud business to review their expenditure in AWS. The business had advised me of 3 challenges around their storage:

  1. Cost – EBS storage represented over 90% of the MySQL estate cost.
  2. Performance – The business was regularly purchasing more storage capacity than they needed (AWS EBS provides a baseline performance of 3 IOPS per GiB).
  3. Cloning – Cloning large 10TB databases resulted in reduced performance of the destination EBS volume while the blocks were rehydrated from the source S3 snapshot. In the past, it has taken 5 days until full throughput was available to the system.
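
To make the capacity-for-performance trade-off in point 2 concrete, here’s a quick worked example. The 9,000 IOPS target is hypothetical, and the 3 IOPS per GiB ratio is the baseline for EBS gp2 (general purpose SSD) volumes:

```shell
# gp2 baseline performance scales at 3 IOPS per provisioned GiB, so the
# provisioned size is dictated by the IOPS target, not the dataset size.
target_iops=9000
gib_needed=$(( target_iops / 3 ))
echo "GiB to provision for ${target_iops} IOPS: ${gib_needed}"
# → GiB to provision for 9000 IOPS: 3000
```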

This was a really interesting experience that I’ve learnt a lot from, so I decided to take it further and do some modelling of the costs using 5 x MySQL servers each with 11TB of EBS storage attached:



In my modelling, the storage cost represented 94.3% of the total cost of running the 5 MySQL servers.

Now it’s no secret that I work for a company that develops a cloud storage platform that overlays native EBS storage and adds storage efficiencies such as deduplication and compression, plus the added advantage of instant cloning with no performance warm up time.

So, here’s how that same architecture looks with the ONTAP Cloud storage running:

The first thing you’ll notice is that you no longer have to manage the EBS volumes – ONTAP takes care of this for you. That means no worrying about RAID types, capacity or performance on a host-to-host basis. Win.

The second benefit is reduced costs. ONTAP brings deduplication and compression to the party. Your mileage will vary depending on your data’s suitability for storage efficiencies but it’s not uncommon to see >30% space savings. Win.

If we apply that logic to this scenario, our resulting storage expenditure will be reduced from £5,188.92/mo to £3,329.56/mo. That’s a saving of £1859.36/mo!

But wait – this all assumes that I am not using the instant cloning functionality that ONTAP has. This scenario has 2 development servers (dev/test) that currently use EBS snapshots (S3). This is costly, as each clone of the 10TB source database results in full space consumption and costs from AWS. In comparison, clones created with ONTAP consume no additional space, so the cost of clones is significantly reduced. If the customer changes anything within the clone, only those changed blocks add to the overall storage cost – again, significantly less than doing a full copy-based clone from EBS to S3 and back to EBS.

So what does this now look like if we start to use storage efficiencies AND instant cloning for the two dev/test databases?

The storage cost has reduced even further, from our original £5,188.92/mo to £1,997.73/mo! That’s a saving of £3,191.19/mo, or £38,294.28/year.

That’s all well and good, but of course ONTAP has a licensing cost – what would our final savings look like when we take that into account? I’m basing this on the Pay & Go licensing (the most expensive option) over 12 months:

ONTAP Cloud Pay & Go (R3.2xlarge) / 12 months = $19,307 ~ £15,183.22/year

Therefore, the customer would be in line for a total saving of £23,111.06/year, saving almost 35% of their overall AWS cost, pretty impressive!
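
For anyone who wants to check the sums, the whole model fits in a few lines of shell:

```shell
# Monthly storage drops from £5,188.92 to £1,997.73 with efficiencies +
# cloning; annualise the saving, then subtract the yearly ONTAP Cloud licence.
awk 'BEGIN {
  monthly_saving = 5188.92 - 1997.73        # £3,191.19/mo
  annual_saving  = monthly_saving * 12      # £38,294.28/yr
  licence        = 15183.22                 # ONTAP Cloud Pay & Go, R3.2xlarge
  printf "Net annual saving: £%.2f\n", annual_saving - licence
}'
# → Net annual saving: £23111.06
```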

This solves the cost problem for the business I was working with, and also fixes the performance issue that they currently experience when working with native AWS cloning.

Not only that, ONTAP Cloud has APIs, PowerShell, Kubernetes and CLI management, meaning that it can be seamlessly integrated into their developer toolsets and processes.

So, the customer is now able to rapidly develop in AWS with on-demand clones and further reduce their storage footprint with storage efficiencies.

So, it’s a no-brainer – try it today – free of charge for 30 days here.


Amazon Linux: Migrate MySQL from EBS to iSCSI

This week, I was working with a 100% cloud business who have been using AWS since their incorporation over 5 years ago.

The business wanted to take advantage of the instant cloning capabilities offered by running ONTAP in AWS and migrate the existing MySQL databases to the new cloud storage system.

These steps will get you up and running quickly, including some handy gotchas that you may run into.

First, I assume that you already have your Amazon Linux (or other instance) up and running and that you have your iSCSI devices mapped and mounted in your instance. Don’t worry if you haven’t – I created a handy post here that covers it.

Let’s check the current location of your MySQL datadir. There are two methods for checking this:

Method #1: Check within MySQL

mysql -u root -p

select @@datadir;

Method #2: Check from the console and look at the current value of datadir (for example datadir=/var/lib/mysql)

sudo cat /etc/my.cnf

Now that we know the current data directory, we can move the database to the new location with the following steps. In this example, my iSCSI device is mounted at /mnt/ontap.

Shut down the MySQL service (this ensures data and application consistency):

sudo service mysqld stop

Next, copy the database to the new location:

sudo rsync -av /var/lib/mysql /mnt/ontap

Once complete it’s best to rename the old directory to .bak to prevent confusion:

sudo mv /var/lib/mysql /var/lib/mysql.bak

Now it’s time to point the data directory path at the new location:

sudo nano /etc/my.cnf

datadir=/mnt/ontap/mysql

It would seem natural at this point to start the service, however, there is at least one more step:

(Optional: if you are using AppArmor):

sudo nano /etc/apparmor.d/tunables/alias

At the bottom of the file, add the following alias rule:

. . .
alias /var/lib/mysql/ -> /mnt/ontap/mysql/,
. . .

Restart the AppArmor service:

sudo service apparmor restart

Now for the final step: MySQL runs an environment check script upon service start. The script simply checks for the existence of /var/lib/mysql and /var/lib/mysql/mysql. We need to create a minimal directory structure to pass the environment check:

sudo mkdir -p /var/lib/mysql/mysql

That’s it! Now we can start the MySQL service:

sudo service mysqld start

Finally, let’s check that the new data directory is indeed in use:

mysql -u root -p

select @@datadir;

Thanks for reading, I hope you found this useful!


Amazon Linux: iSCSI – Install and connect to your storage

iSCSI is a great way to attach enterprise-class storage to your EC2 instances, and it’s very easy to set up, even if you have zero storage management experience. The best part is that all of the steps are completed directly within your Linux instance itself.

Prerequisites:

  1. You’ll need some iSCSI storage within your VPC. This is really easy to deploy with ONTAP Cloud – grab yourself a free 30-day trial.
  2. An instance running in your VPC that you can SSH to.

Simply follow the steps below:

First, log into your instance via SSH/terminal (most people use PuTTY for this).

Update your packages (Optional – but best practice):

sudo yum update -y

Next, we install iSCSI into our host:

sudo yum install iscsi-initiator-utils

We will now discover the iSCSI targets (I’m using ONTAP in AWS, but you can use it in Azure, whitebox or on-prem). Simply replace 0.0.0.0 with your own iSCSI portal IP address.

iscsiadm -m discovery -t st -p 0.0.0.0

Optional: here’s how to find your iSCSI IP address for ONTAP. SSH to your ONTAP management IP (1.2.3.4 in this example):

ssh admin@1.2.3.4
network interface show
exit

Next, confirm that your host sees the iSCSI portal(s) correctly. They will be listed with the following command:

iscsiadm -m node

iSCSI requires your host to log in to the discovered portal. Simply run the following command:

iscsiadm -m node --targetname "iqn.xxx.netapp:xxxxx" --portal "<ip-address:port>" --login

Optional but recommended: Restart the iSCSI service:

/etc/init.d/iscsi restart
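
If you want the session to come back automatically after a reboot, open-iscsi can be told to log in at service start. A sketch using the same placeholder target and portal as in the login step:

```shell
# Mark the node for automatic login when the iSCSI service starts.
iscsiadm -m node --targetname "iqn.xxx.netapp:xxxxx" \
         --portal "<ip-address:port>" --op update \
         -n node.startup -v automatic
```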

Now, make a note of the initiator name on your host – this will come in handy later:

cat /etc/iscsi/initiatorname.iscsi

The next few steps are for ONTAP users, if you are using other storage please refer to their reference manuals:

Login to your ONTAP system, create a volume and LUN:

ssh admin@1.2.3.4
volume create -volume <myvolname> -aggregate aggr1 -size 1024GB -space-guarantee none
lun create -volume <myvolname> -lun <lunname> -ostype linux -space-reserve disabled -size 1024GB

iSCSI uses the concept of igroups in order to securely and logically share resources. We want our host iSCSI initiator to be part of a new igroup (for example I could create an igroup called mysql for all of my database instances). In this example my host initiator was iqn.1994-05.com.redhat:265535cef94:

igroup create -igroup <myigroup> -ostype linux
igroup add -igroup <myigroup> -initiator iqn.1994-05.com.redhat:265535cef94
lun map -volume <myvolname> -lun <lunname> -igroup <myigroup>

That’s it – your Linux instance should now see all LUNs that are available to the igroup of which it is a member. You can view these LUNs with:

fdisk -l

Now that you have a LUN, you’ll need to partition it, format it and mount it:

fdisk /dev/sdb
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /mnt
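
Note that the mount above won’t survive a reboot. To make it persistent, add an /etc/fstab entry with the _netdev option so the mount waits for networking and the iSCSI service to come up (and since iSCSI device names can shift between boots, a filesystem UUID is safer than /dev/sdb1 in the long run):

```shell
# Persist the mount across reboots; _netdev defers it until the network is up.
echo "/dev/sdb1  /mnt  ext3  _netdev  0 0" | sudo tee -a /etc/fstab
```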

You’re ready to go!

 

ONTAP goodness:

If you happen to be running ONTAP, you can try instant cloning of your LUNs, databases, etc. Unlike AWS snapshot clones, these are instant and have full performance the minute they are created! Simply SSH into ONTAP and run:

volume clone create -parent-volume myvolume -flexclone myclonename

And just like that, you have created a clone of your data! Clones take up no additional space, are instantly available and can be mounted to any other (or the source) instance – great for speeding up development and saving money at the same time!
