Monthly Archives: March 2016

Downloading training material

We provide access to training materials via AWS S3; this requires a few additional steps compared to an FTP site but provides much faster, more frequent, and more secure access. This shift from FTP to AWS was made keeping in mind that most of our participants have bandwidth constraints and are not familiar with FTP access methods.

The below steps help users download all our content with minimal effort. We are always looking for feedback on how to improve the training experience so please feel free to leave your comments in the comments section.

Visit the link below to download the installer files:

Step 1

Download and install the AWS command line tools from the link below:

https://s3-ap-southeast-1.amazonaws.com/enabledbusiness.trainings.public/AWSCLI64.msi

NOTE: Restart your computer after this step.

Step 2

Open the email from trainxl.com and download the batch file linked in it. An example link is shown below:

https://s3-ap-southeast-1.amazonaws.com/enabledbusiness.trainings.public/azure.bat

Note: We recommend placing the batch file in its own folder and running Command Prompt from within that folder.

Step 3

Open the email from trainxl.com and note the username and password details.

Step 4

Open Command Prompt and drag and drop the batch file into the Command Prompt window, then enter the username followed by a space and then the password, as shown in the screenshot below.
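
For reference, the downloader batch file is essentially a thin wrapper around the AWS CLI. The exact contents vary by training, but a minimal sketch might look like the below (the bucket name and paths here are hypothetical examples, not the real file):

@echo off
REM %1 = username (access key ID), %2 = password (secret access key), as entered in this step
aws configure set aws_access_key_id %1
aws configure set aws_secret_access_key %2
REM Mirror the training bucket into the local download folder
aws s3 sync s3://example-training-bucket/azure C:\Enabledbusiness\Azure --region ap-southeast-1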

Step 5

Verify the files are downloaded successfully into C:\Enabledbusiness\XXX where XXX is the name of the training you attended.

Troubleshooting

Access denied error

Ensure there are no firewall, proxy, or other network access restrictions. Make sure the correct batch file has been downloaded. Verify the username and password being entered. Ensure the account running Command Prompt has permission to create files and folders on the hard disk.
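
If all of the above check out, you can also sanity-check the CLI installation and your credentials directly from the same Command Prompt; a quick sketch (the bucket name is a hypothetical example):

aws --version
aws s3 ls s3://example-training-bucket --region ap-southeast-1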

Disk space issues

Edit the batch file in Notepad and change the local download path as desired.

Any other issues

Contact the trainer using the email address in the CC field of the email you received from trainxl.com.


Permissions on Amazon S3 to create your own personal Dropbox – Step 2

In our previous post we explored the basics of Amazon S3 storage and saw how to create a bucket and upload and download files from the S3 account. In this post we explore how to grant permissions to the S3 bucket. I use AWS S3 for a number of reasons, such as storing backups and moving files between different accounts for SSIS ETL processes. The file upload and download speeds within AWS are very fast, and this comes in handy when moving files across geographies; it is also used as a form of CDN in some cases. Combined with the uptime SLA provided, there are a number of cases where S3 makes sense for DBAs. However, all this is moot without security, so here we explain the different ways to implement it.

The best and recommended way to assign permissions within AWS is to use IAM, or Identity and Access Management, which is visible as shown below after logging into the console.

Create a Sign-in Link

Once you visit this link there are a few steps to perform to create a user. First we will need a login page so that users can access the AWS console without having to register and create an AWS account of their own. This link is created by default and can be found on the main Dashboard page. Note that I have created a link with my company name; you can do the same by clicking the Customize button to the right of the link (not shown here). This link needs to be passed on to users in order for them to connect to AWS and access the S3 account.

Create a User

To create a user, click the Users link in the left-hand menu as shown below, then click the Create New Users button:

Next enter the username for the user you would like to create, and note the checkbox at the bottom: this creates access keys for the user. An access key is a unique alphanumeric key used by AWS to identify this user when calling REST APIs or the AWS CLI for automated processes, like downloading files from an S3 bucket automatically. We will use this key later for automating uploads/downloads.

Press the Create button in the bottom right corner of the screen. On successful creation you will see the below screen, which shows the user's access keys; make a note of these and don't lose them.

Press the Close button in the bottom right corner of the screen. Once you're redirected back to the Users screen you can see the user listed; click the username.

On the below screen we can configure a number of important properties for the user. Make a note of the User ARN; this is used by AWS to uniquely identify a user and will be required when granting individual permissions to a specific user. If you didn't check the box on the previous screen you can create access keys now by clicking the Create Access Key button as shown below. With the username created you need to assign a password to the user, so click the Manage Password button. You can also require multi-factor authentication (the equivalent feature in Azure is paid) when users sign in to AWS. Additional support exists for users logging in using a certificate.

Enter a new password and press Apply in the bottom right corner. With this we have finished creating the user.
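
For those who prefer scripting, the same user setup can be done with the AWS CLI; a sketch (the username is an example, and you would supply your own password):

aws iam create-user --user-name s3admin
aws iam create-access-key --user-name s3admin
aws iam create-login-profile --user-name s3admin --password MyStr0ngPassword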


How to create an S3 Admin Account where a user can manage just S3

On the user management page click the Permissions tab and then click the Attach Policy button. In this case we are granting permissions at the AWS service level and not the object level, meaning the user can manage the entire S3 service in my account: all buckets and all objects (the files within those buckets).

On the below screen, type the name of the service you want to grant permissions for to see a list of matching policies, then check the policy you want to grant.

Press the Attach Policy button in the bottom right corner.
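
The CLI equivalent is a single call. AmazonS3FullAccess is the AWS managed policy granting full control of S3; the username is the example user created earlier:

aws iam attach-user-policy --user-name s3admin --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess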

You can test the above permission by going to the sign-in URL mentioned earlier in this post.

As you can see, since the account is an S3 admin it can upload/download files etc.

Note what happens when I try to access another service to which I do not have permissions.

How to create a Public S3 Bucket

Now occasionally you might want to make the S3 bucket public so that anyone can upload or download files from it.

In this case you do not need to create users, since the purpose is to let people access files anonymously without logging in. A common example is images downloaded from email newsletters, but there are obviously other use cases as well. Here you can simply click the bucket (for details on how to access a bucket, visit this previous post). In the below screenshot I set permissions so that users can download a doc file from my S3 bucket: first click the file, then select Properties, and under the Permissions tab add Everyone with the Open/Download permission. The link can then be passed to end users to allow them to download the file without having to log in, and without creating users as shown in the previous steps.

Click Properties as shown in the top right corner of the screenshot, then under the Permissions tab click Add more permissions, select Everyone, grant permissions as required, and press Save.
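
If you prefer the command line, the same result can be sketched with the AWS CLI (bucket and key names are hypothetical examples):

REM Make an existing object publicly readable
aws s3api put-object-acl --bucket my-example-bucket --key docs/sample.doc --acl public-read
REM Or set the ACL at upload time
aws s3 cp sample.doc s3://my-example-bucket/docs/sample.doc --acl public-read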

In our next post we take security one step further by assigning permissions such that one user can access just one bucket and only download files from the bucket.

In our next post we explore how to upload/download a large number of files into AWS S3 storage using Command Prompt.

Using Amazon S3 to create your own personal Dropbox – Step 1

Amazon S3 is one of the most popular services provided by the cloud provider Amazon. In a very basic sense it's a cloud-based hard drive where you can upload and download files; its equivalent service in Microsoft Azure is Azure Blob Storage. In this series of posts we cover how to set up and configure AWS S3 storage and ways to upload or download files from it. If you are unfamiliar with AWS, please visit the references section for details.

Note: AWS offers a one-year free tier for new users if you want to try it out.

Before we get started with creating an S3 bucket, let's first evaluate if it's worth our while. To get an estimate of the cost of using S3, you can visit the link below:

https://calculator.s3.amazonaws.com/index.html

In the below screenshot we see that it costs $19/month for 500 GB of standard storage (the same amount of infrequent-access storage costs $10), assuming we upload or download up to 10,000 files each month for a total of 10 GB/50 GB of transfer per month.

This works out pretty cheap for most requirements, especially for use as a backup. An added bonus of using S3 is that static webpages can be hosted on the S3 storage and served from the account just like a normal website. Combine this with the fact that there are many data centers around the world, and you can deploy really low-latency websites.

Note: AWS S3 is free for data upload, and charges for download apply only after the first 1 GB.

Very Important Note: Choose your region carefully; the region plays a role in how the files are accessed and can't be changed later. The region is the data center where the S3 storage is provisioned; it dictates how fast network transfer is for the end user and also the URLs for the blobs stored in the S3 account.

Creating an S3 Storage Bucket

Log in to the AWS website and click Sign in to the Console

And Login

After logging in, click S3 under Storage & Content Delivery. Notice Glacier storage: it's even cheaper than S3, but as its name suggests it's meant for archival and has retrieval latency.

At the top of the page click Create Bucket. A bucket is like a folder, or in this case more like a drive (D:\), into which you can upload files.

In the pop-up below enter a name for the bucket; try to keep the name descriptive and use a format that keeps related buckets logically ordered. Make sure to check the region, as mentioned earlier.

If you would like detailed logs on who accessed and downloaded which files, click Set Up Logging and then check the Enabled box.

Choose a different target bucket; this prevents people with access to the original bucket from being able to view or download the logs as well. The target prefix means the log files will be saved in a folder called logs.

Press Create and your Bucket is ready.
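
As an aside, the same bucket can be created from the command line if you have the AWS CLI installed; a sketch (the bucket name is an example, and bucket names must be globally unique):

aws s3 mb s3://my-example-bucket --region ap-southeast-1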

Deleting an S3 Storage bucket

If you want to delete a bucket, click on the background next to the name of the bucket (clicking the name itself will open the bucket) and then click Delete Bucket from the Actions menu at the top.
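
The CLI equivalent is below; add --force to also delete the bucket's contents first (the bucket name is an example):

aws s3 rb s3://my-example-bucket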

Browsing an S3 Storage bucket

In order to browse the contents of an S3 bucket, click the link on the bucket name.

This will redirect you to the next page where the bucket contents will be listed.
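
From the CLI, the same listing is a one-liner (the bucket name is an example):

aws s3 ls s3://my-example-bucket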

Uploading a file into an S3 Storage Bucket via the Web portal


In order to upload a file into the S3 storage bucket, click the Upload button as shown below. To keep files organized, it's better to create a folder first by clicking the Create Folder button.

In the below pop-up, click Add Files to select and upload individual files, or Enable Enhanced Upload to upload multiple files and folders in one go (requires Java).

The files that are going to be uploaded are listed above the Add Files button. Once ready, click the Start Upload button in the bottom right corner of the pop-up to begin uploading the files.
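
Uploads can also be scripted with the AWS CLI, which is handy for backups; a sketch (the local paths and bucket name are examples):

REM Upload a single file into a folder within the bucket
aws s3 cp C:\backups\mydb.bak s3://my-example-bucket/backups/
REM Or mirror an entire local folder
aws s3 sync C:\backups s3://my-example-bucket/backups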

In the next post we are going to cover how to add permissions to the folder to secure it from unwanted access.

References

https://aws.amazon.com/s3/

Cannot get the column information from OLE DB provider

Cannot get the column information from OLE DB provider “Microsoft.Ace.OLEDB.12.0” for linked server “(null)”.

Encountered this little gem while performing a migration today. It turns out the SQL Server service account is used when establishing a linked server connection to a CSV file, and as a result the OLE DB connection fails with messages like "could not read column information since the user doesn't have permissions", or the error above.

The workaround (not a fix) is to use a local account instead of the NT Service managed account to start the SQL Server services. A side note to the above issue: when the query ran and the files could not be loaded, the files were left locked by the SQL Server process and wouldn't open anymore. To find and kill the OS process that has locked a file, use Resource Monitor; under the CPU tab there is an Associated Handles section where you can enter the file name.
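
As an alternative to Resource Monitor, the Sysinternals Handle utility performs the same search from the command line; a minimal sketch (the file name is a hypothetical example):

handle.exe mydata.csv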

References

http://www.aspsnippets.com/Articles/The-OLE-DB-provider-Microsoft.Ace.OLEDB.12.0-for-linked-server-null.aspx

http://stackoverflow.com/questions/3565218/how-to-know-what-process-is-using-a-given-file


What do indexes really look like?

We have all seen the traditional diagram that represents a B-tree, and I am sure we all resort to it when we try to explain how a B-tree works; I know I have used it in my fair share of examples. Now while this diagram is great for explaining the concept to novices, it's not always good at representing the order in which things actually happen.

In this video I try to explain the structure of a B+tree.

Creating, Configuring and Querying DocumentDB in Azure

DocumentDB is a NoSQL, JSON-based data store available in Microsoft Azure. In this post we cover how to create a DocumentDB account, followed by a database and a collection, and then we store documents in the collection. Once done, we will query the documents using the query window available directly within the portal.

First we visit the Azure portal and click New.

Search for and click DocumentDB

On the next blade press CREATE

On the below screen we need to give our DocumentDB account a unique name and specify the resource group it belongs to. This step can take a few minutes to complete.

Once the DocumentDB account is created we need to create a new database into which we will put our data. Click Add Database in the below screen.

Enter a name for the database and then press OK

Next we create the collection as shown below. Click the database name in the list of databases shown within the DocumentDB account.

Click Add Collection as shown below.

Enter a name for the collection and then choose an appropriate pricing tier and indexing policy

The indexing policy determines the storage and performance characteristics of the collection; below are the two different options. When done, press OK on the above screen.

With the collection ready, we are now going to upload some JSON documents into it. JSON, like XML, is a format that allows us to store data that isn't strictly structured into rows and columns like a table. This can be done from the Upload button on the collection's screen as shown below.

Click the Upload Document button.

Then click the folder ellipsis and select the JSON files to be uploaded.

You can also manually create a document in the DocumentDB repository as shown below; click Create Document.

Then enter the JSON data into the document. Every JSON document needs a unique id, which can be used to reference it, much like the PK of a table.
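
As an illustration, a minimal document might look like this (everything other than the id field is a made-up example):

{
  "id": "1",
  "name": "Jane Doe",
  "city": "Mumbai"
}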

Once the documents are saved, you can find those that meet your search criteria by querying them directly within the management portal as shown below.

Enter your query in the query window and press the Run Query button. In the next post we will cover some common types of queries that can be executed.
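
As a quick teaser, assuming the sample document shown earlier, a query like SELECT * FROM c WHERE c.city = 'Mumbai' would return it; DocumentDB queries use a SQL-like grammar in which c is simply an alias for the collection.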

And now you have a DocumentDB in which JSON can be queried.


Adding additional Network interfaces in Azure

In this post we cover how to add an additional NIC (network interface card) to our Azure VM. Most real-world production environments have multiple NICs, and quite a few applications depend on dedicated NICs for scalability or even high availability. In a previous post we covered how to add a VM to a virtual network; now let's assume that VM is being used for additional deployments and as a result we need a new network interface card for it.

First visit the Azure portal and search for network interface card in the search bar as shown below:

Click the link for the network interface card to see the below blade, and then press Add.

In the below screen we can configure the details for this NIC card, such as the security rules (network security group), the subnet, and the resource group it belongs to.

Once the above details are entered press Create to provision the new NIC card.

The next step is to assign the NIC card to a VM, so we navigate to the VM we want to add this NIC Card to and then press All Settings.

NOTE: Not all VM images support multiple NIC cards. For details see the references link.

Under the General tab press Network Interfaces.

Identify the NIC you want to attach, select it, and then press Attach.
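
For completeness, the same steps can be scripted with the newer cross-platform Azure CLI; a sketch (all resource names are examples, and note the VM must be deallocated before a NIC can be added):

az network nic create --resource-group MyRG --name MyNic2 --vnet-name MyVnet --subnet default
az vm deallocate --resource-group MyRG --name MyVM
az vm nic add --resource-group MyRG --vm-name MyVM --nics MyNic2
az vm start --resource-group MyRG --name MyVM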

References

https://azure.microsoft.com/en-in/documentation/articles/virtual-networks-multiple-nics/

Monitoring Billing in Azure by Resource Group

In our previous post we explored how to organize resources deployed in Azure. In this post we explore how to understand the cost impact of the resources that were deployed.

If you navigate to the Azure portal and click the Resource groups link on the top left, a list of deployed resource groups will be visible.

Identify and click the resource group you want to work with as shown below:

In the next blade you will see a list of resources that were deployed into this resource group.

Click Settings to view a list of properties such as Resource Cost, Deployments, Alerts, etc.

E.g. when you click Deployments, the blade lists all the recent changes made to this resource group, such as adding a VM or a storage account. Similarly, you can view estimated resource costs by clicking the Resource Cost link.
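
The same deployment history can be pulled from the command line with the newer cross-platform Azure CLI; a sketch (the resource group name is an example):

az group deployment list --resource-group MyRG --output table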

In our next post we explore how to configure Alerts against specific Resources like a VM for example.


The jumpy mouse problem solved using XE

I have a mouse (the real kind we connect to our laptops, not the fake kind that runs around after cheese :-p) that jumps all over my screen, and every once in a while it lands right on the little X button in the top right corner and I end up clicking it before I know what happened. This might seem like a silly problem to have, but it's a pain when I write queries in SQL Server, because the immediate next thing I do is press No as soon as I see a pop-up.

In short, I end up losing my queries. Another problem I face is that I often write a ton of queries in SSMS without realizing how useful they would be later on for some other requirement. To solve these issues, the best solution seemed to be to save every query as soon as it's executed; this way I can revisit the code at a later date. Now we already know this can be achieved using a trace, and that a trace can collect a lot more info, but since I was running this on my laptop and wanted a lightweight way of achieving it, I decided to go with XE, or Extended Events. The problem with using XE is querying the data afterwards, so in this post I am going to show you how to set up your very own query tracker along with a procedure to query the XE files.

First off I create a folder in the logs folder for my XE output.

Next I create an XE session to capture the SQL_TEXT every time a SP or a query completes.

CREATE EVENT SESSION [Querytracker] ON SERVER
ADD EVENT sqlserver.sp_statement_completed(
    ACTION(package0.collect_system_time,package0.event_sequence,sqlserver.client_app_name,sqlserver.sql_text)
    WHERE ([package0].[greater_than_equal_uint64]([sqlserver].[database_id],(7)))),
ADD EVENT sqlserver.sql_statement_completed(SET collect_statement=(1)
    ACTION(package0.collect_system_time,package0.event_sequence,sqlserver.client_app_name,sqlserver.sql_text)
    WHERE ([package0].[greater_than_equal_uint64]([sqlserver].[database_id],(7))))
ADD TARGET package0.event_file(SET filename=N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\QueryList\XE_QueryTracker.xel')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=ON)
GO

Notice here I am collecting the events sqlserver.sp_statement_completed and sqlserver.sql_statement_completed, and only for databases with an ID of 7 or higher; you can add additional filters for SSMS etc. if needed.
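
Also note that STARTUP_STATE=ON only means the session starts automatically with the instance; to begin capturing right away, start it manually by running ALTER EVENT SESSION [Querytracker] ON SERVER STATE = START.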

The output of the XE is saved in the file location shown above. Next I needed a way to query the XE output file to extract only the timestamp and the actual query.

CREATE PROCEDURE GetRecentQueries @starttime datetime2, @endtime datetime2, @searchphrase varchar(100)
AS

SELECT  -- XE timestamps are stored in UTC; DATEADD(mi,330,...) shifts them to UTC+5:30 (IST), adjust for your time zone
        DATEADD(mi,330,eve_data.value('(//event/@timestamp)[1]','datetime2')) AS [TIMESTAMP],
        eve_data.value('(//event/data[@name="statement"]/value)[1]','varchar(2000)') AS [SQL_Text]
 FROM
	(SELECT
		CAST(event_data AS xml) AS eve_data
	FROM sys.fn_xe_file_target_read_file('C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\QueryList\*.xel',null,null,null)) xed
 WHERE eve_data.value('(//event/data[@name="statement"]/value)[1]','varchar(2000)') LIKE '%'+@searchphrase+'%'
 AND  DATEADD(mi,330,eve_data.value('(//event/@timestamp)[1]','datetime2')) BETWEEN @starttime AND @endtime


And that's it. We can now simply search for a phrase or table within the XE output and retrieve all matching queries.
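
For example, EXEC GetRecentQueries '2016-03-01', '2016-03-31', 'mytable' would return every captured query from March that mentions mytable.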

PS: In the time that it took me to write this post I have already used this approach once 😉

Creating a Resource Group in Azure

A resource group is a nice little feature that helps Azure administrators stay organized when managing a large-scale deployment. In its simplest sense it behaves like a folder on your hard disk: it keeps related resources together so they can be addressed by a common name. A developer working on multiple projects might create a resource group for each project and then launch services within these resource groups; by doing so he automatically gains the ability to view the deployment history of services in each group as well as track the cost of each group separately. In most cases the developer is asked whether he wants to add a service to an existing resource group or create a new one. In this case we are going to create an empty resource group which we will use later.

To get started, simply search for Resource groups in the left-hand options of the Azure management portal.

Click Add as shown below

Give the resource group a name; try to use something that properly identifies the boundary of the resources, like Prod_CompanyWebSite, and press Create.

Once the resource group is created you can click Add and simply launch new services into it.
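
If you prefer scripting, the same can be done with the newer cross-platform Azure CLI; a sketch (the name and location are examples):

az group create --name Prod_CompanyWebSite --location southeastasia
az group list --output table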

References

Configuring a Network Security Group

Creating a Virtual Machine in Azure

Creating a Virtual Network in Azure

Creating a Web App Role in Azure

Creating a Public IP in Azure

Creating a Database in Azure

Setting up Windows Authentication with Azure Databases

Upgrading Azure Database to V12

Exporting Data from Azure Databases

Querying Azure Databases from the Management Portal

Configuring Database Firewall in Azure

Creating Azure Database Server and Database