A blog about my life, development and projects

Hoarding data and data retention needs

This blog post will be about a topic that I haven't even thought about in my 13+ years of software development and technology. Hoarding - especially the data kind.


Tonight while eating my dinner I was watching a show on Netflix about hoarders and people collecting junk, which made me cringe. Over the years I have watched many such shows, without giving them much thought.
My house is always clean and free of clutter; I hate paper to start with, everything must have a place, and I periodically throw away or sell unwanted stuff that I don't use.
This is not always possible, as you know "life happens", but I do try, and because of this I have never given the hoarding topic much time.

Everyone who knows me will tell you that I have a thing against paper, or the use thereof. In this day and age, with cloud storage and connected everything, the use of paper is quickly becoming a thing of the past.
Tonight I realized with a shock that this can easily lead to a different kind of problem, not just in our personal lives but in the corporate world as well.

Let's first look at what hoarding is:

Compulsive hoarding, also known as hoarding disorder, is a pattern of behavior that is characterized by excessive acquisition and an inability or unwillingness to discard large quantities of objects that cover the living areas of the home and cause significant distress or impairment.
https://en.wikipedia.org/wiki/Compulsive_hoarding 

So basically hoarding comes down to collecting junk and not being able to get rid of it. I definitely don't have a problem there, but it triggered a thought about all the different kinds of hoarding, especially in the technology world, and I realized that an excessive collection of data can also be considered hoarding.
After a simple search on the net I quickly realized the topic has been debated quite extensively.

Digital hoarding (also known as e-hoarding) is excessive acquisition and reluctance to delete electronic material no longer valuable to the user. The behavior includes the mass storage of digital artifacts and the retainment of unnecessary or irrelevant electronic data. The term is increasingly common in pop culture, used to describe the habitual characteristics of compulsive hoarding, but in cyberspace.
https://en.wikipedia.org/wiki/Digital_hoarding

With a shock of horror I now know that I have fallen victim to data hoarding, and yes, it happens to all of us. I have a lot of old hard drives with countless backups over the years, some as far back as my school days, data lying on cloud services, data from way back when, and it goes on and on.
I won't classify this as a problem yet, as I haven't yet shown a reluctance to get rid of it; I simply haven't thought about my personal data retention policy - we'll get to this in a bit. But the question everyone should ask themselves is "Do you really need that data from back in 2005?"

Data hoarding in business even has a name now: Big Data. People are making a living trying to give meaning to the endless amounts of data that everyone, businesses included, collects over the years.
It's perfectly understandable to store data because of legislation or local laws, such as storing medical or financial information for a number of years.

As more and more systems, people and businesses become connected and start to generate vast amounts of information, it becomes ever more pressing to know what data you should keep, what data you need, and what data is just causing clutter.

In our personal lives we are generating so much data on social networks, chatting, texting, emails, digital photography and videos that losing track of it all is a real concern.
I for one definitely did, and I will start to put measures in place not only to delete data but also to organize the data that I need to keep.

Data retention defines the policies of persistent data and records management for meeting legal and business data archival requirements.
https://en.wikipedia.org/wiki/Data_retention

Now that we know what data retention means, we need to define what we will store and why, and lastly make a plan for how we will clean up our data.

My steps for a data retention policy look like this:

  • Is the data a temporary record?
  • Does the data primarily consist of intellectual property?
  • Is the data a permanent record?
  • Have I needed or used the data in the last 3 years?
  • Is there a legal or contractual requirement to store the data?

My plan of action to deal with my data problems will be as follows:

  • Sort photos and videos across cloud services and social media, delete duplicates, organize them into albums and consolidate them into one service.
  • Go through all the hard drives lying around and delete data that I have not used in 3 years or have no need to keep, then consolidate the data that I do need or use (see the sketch after this list).
  • Consolidate all IP and code written into VSTS under the respective projects, including the code written for microcontrollers and hobby electronics.
  • Sort and store business related data and properly backup or archive a single copy in accordance to contracts.
  • Securely erase data from redundant or old hard drives and physically throw away the drives.
  • Ensure that I have a backup strategy in place that works, for example using the 3-2-1 strategy. This means having 3 total copies of your data, 2 of which are local but on different mediums (read: devices), and at least 1 copy offsite.
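
For the hard drive clean-up step, a simple script can surface deletion candidates before I commit to anything. Below is a minimal C# sketch; the root path and the 3-year cutoff are my own illustrative choices, and I use LastWriteTime because last-access tracking is often disabled on Windows:

using System;
using System.IO;
using System.Linq;

class StaleFileFinder
{
    static void Main()
    {
        DateTime cutoff = DateTime.Now.AddYears(-3);
        string root = @"D:\OldBackups"; // illustrative path, point this at the drive to audit

        // Walk the whole drive and keep only files untouched since the cutoff.
        var staleFiles = Directory
            .EnumerateFiles(root, "*", SearchOption.AllDirectories)
            .Select(path => new FileInfo(path))
            .Where(file => file.LastWriteTime < cutoff)
            .OrderBy(file => file.LastWriteTime);

        foreach (var file in staleFiles)
        {
            Console.WriteLine($"{file.LastWriteTime:yyyy-MM-dd}  {file.FullName}");
        }
    }
}

The output is just a review list; nothing gets deleted until I have looked at it.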

Now that I have a plan I can start getting rid of my digital clutter, clean up my life, and get away from this data hoarding thing.

If anyone has interesting stories regarding data hoarding, please do leave me a comment or send me a message.

New blog platform with more control

Dear readers

I am glad to announce that TechnoDezi has moved to a new blog platform. Same great site, more control.

You might ask why I would move to a new blog platform if the site still looks basically the same. Well, the short answer is that the new platform gives me more control over the posts, the theme is easier to manipulate, and I have the ability to extend it with more great features that are yet to come and will hopefully bring additional coolness to this blog.

Among the new features: my posts will be backed up, and files can be stored in the Azure cloud, which is fully integrated. The new blog will also, at a later date, support invoicing, so my clients will be able to pay directly via TechnoDezi. This is part of a drive to optimize how I do business and create a greater sense of trust.

Then in the months to come I will be starting my YouTube/video channel, which I promised in April. The new blog platform will enable me to live stream directly on the blog via Azure Media Services. To those of you who don't speak geek, it just means that I will not be going via a 3rd-party service but will keep the media and videos directly on my blog.

The video channel has been delayed until I can find a suitable workshop with enough space to do recordings, but this is still very much at the top of my list.

Life update and response on digital propaganda

Part 1

Once again I find myself writing to you after many months have passed. Time goes by so quickly.
Many of you might have wondered where I went and what I have been up to lately, and this post is to give all my readers a quick update as well as a look at what is coming.

The last few months have been extremely busy, and I found myself working a lot of overtime at a particular client trying to get everything done before the deadline – such is the nature of IT. It is so easy to get caught up in the rat race, but if you are one of the sought-after developers it's easy for clients to want more, and the more you deliver, the more is asked. It's a wonderful feeling, but you need to keep a balance and not get lost in the work you do, which I'm unfortunately very good at – getting lost in my work.

In the bit of spare time I had, I worked on a few exciting projects, one of them being a connected vending machine. This project is still ongoing, but if I can I will share some of the things I learned, especially about the communication between C# and Arduino. I have also started working on a facial recognition smart lock, which I will post on YouTube in the coming months.
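
While the vending machine write-up is still to come, the C#-to-Arduino link in such a project boils down to plain serial communication. Here is a minimal sketch using .NET's SerialPort class; the port name, baud rate and command strings are illustrative assumptions, not the actual protocol from my project:

using System;
using System.IO.Ports;

class ArduinoLink
{
    static void Main()
    {
        // Port name and baud rate must match the Arduino sketch (illustrative values).
        using (var port = new SerialPort("COM3", 9600))
        {
            port.NewLine = "\n";
            port.ReadTimeout = 2000; // milliseconds

            port.Open();

            // Send a command the Arduino sketch is assumed to understand.
            port.WriteLine("VEND 1");

            // Read the Arduino's newline-terminated reply, e.g. "OK".
            string response = port.ReadLine();
            Console.WriteLine($"Arduino replied: {response}");
        }
    }
}

On the Arduino side, the matching sketch would read from Serial and print a reply, but that is a story for the promised post.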

One of the biggest things coming for me is a personal brand revamp as well as the launch of my YouTube channel. That's right, I am officially launching my YouTube channel next month, and I hope that it will be exciting and that I can bring a lot of the cool work I do to YouTube land.
I will focus on training, making gadgets, as well as interviews with industry leaders in SA. If anyone wants to nominate themselves, please drop me a mail or a Tweet.

I have thought a lot about what interests me, and at heart I am a maker. I love building things and solving problems, and my YouTube channel will be a way for me to showcase that and hopefully inspire young makers to do the same in SA.

Part 2

Now on to the second part of this post, which is in response to an article on LinkedIn written by my friend and colleague Rory, titled "Caught up in digital propaganda".

“I had almost forgotten how important it is to not neglect the non-digital experiences as well.

I realized that, even in this digital age, we are still humans. And as human beings, we still perceive the world through our "analog" senses. We are still biologically wired up to see, hear, taste, smell and touch to understand the world and process experiences. With so many companies scrambling to "go digital", [the analog experience] is becoming somewhat of a luxury.” ~ Rory

I found this article most interesting. We as humans are currently so focused on technology, automation and robotics that I think we are forgetting to be human. As Rory stated in the article, we need to design our digital transformations around human experiences, not just automating everything but rather using technology to enrich the human experience.

This article made me think of the movie WALL-E, especially the scene where all the humans are blobbing in front of their screens chatting, but no one is actually interacting with anyone else, while the robots go about doing everything, including making a mess of things.

I am definitely inspired to change the way I approach my making, especially my home automation, to try and center it around the human experience instead of trying to do everything automatically. Robots are a good thing, but humans need to be happy too.

If we all can focus on this and not lose sight of the human experience, I think technology in a few years will look much different than what we are currently seeing in sci-fi movies.

KooBoo Enterprise Framework & POCO Generator

To all my avid readers, I bring to you my Enterprise Framework and POCO generator. This is the next product in my suite of open cross-platform apps that make your life so much better.

The POCO generator is an open cross-platform tool that can be used to generate your complete data access entity classes in C#, accompanied by matching stored procedures. It is an alternative to the Microsoft Entity Framework that is geared towards large enterprise systems where control over your data access is needed.

By default, the POCO generator generates the C# classes based on the KooBoo.Framework NuGet package, which implements the Enterprise Library from Microsoft in an easier-to-use way. The KooBoo Framework also supports transient fault handling as well as cross-entity transaction scopes. This means that if you get timeouts while accessing the database, the framework will automatically retry, and if you are using the entities within a transaction scope, you can easily chain multiple entity calls together, giving you the freedom of accessing your data in a structured manner.

The KooBoo Framework gives you more than just data access; it comes with a whole suite of functions aimed at making C# MVC web applications simple and easy to write, including multi-tenant support (Contact Me).

Here are some of the features that come with the Framework; for any assistance please Contact Me:

Data Access

Accessing data with the KooBoo Framework is one of the core benefits and also the motivator behind the POCO generator (which will do all of this for you). If you look at the default generation of the POCO tool, you'll notice a BusinessBase class that all the generated classes inherit. Although you can turn it off, it helps to set up the data access globally for any entity as well as to provide the opportunity to run your entities within a transaction scope.

Setting up the SqlManager:

public abstract class BusinessBase
{
    internal SqlDataManager sqlManager;

    public BusinessBase()
    {
        sqlManager = new SqlDataManager("ConnectionStringName", null);
    }
}

The SqlDataManager constructor takes two parameters: the first reads the connection string with that name from your web.config, and the second can be used to pass in the connection string manually.
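
A quick illustration of both options (the connection string values are my own, and using null for the unused parameter is my assumption based on the constructor shown above):

// Option 1: resolve the connection string named "MyDatabase" from web.config.
var fromConfig = new SqlDataManager("MyDatabase", null);

// Option 2: pass a connection string in directly (illustrative value).
var manual = new SqlDataManager(null, "Server=.;Database=MyDatabase;Integrated Security=True;");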

Accessing Data:

DataTable dt = sqlManager.ExcecuteDataTable("SelectTable1Details", CommandType.StoredProcedure, new List<SqlParameter>() {
    SQL.SQLParameter("PK", SqlDbType.Int, PK)
});

if (dt.Rows.Count > 0)
{
    PK = dt.GetDataCellValue(0, "PK").ToInt32();
    Name = dt.GetDataCellValue(0, "Name");
}

Dealing with output parameters:

//Set output values
if (sqlManager.CurrentCommand != null)
{
    if (sqlManager.CurrentCommand.Parameters["PKOut"].Value != DBNull.Value)
    {
        PK = (int)sqlManager.CurrentCommand.Parameters["PKOut"].Value;
    }
}

Running Entities within a transaction scope:

Table1 t1 = new Table1();
t1.sqlManager.BeginTransaction();

try
{
    t1.Name = "Insert 5";
    t1.InsertUpdateTable1();

    Table2 t2 = new Table2();
    t2.sqlManager.TransactionScope = t1.sqlManager.TransactionScope;

    t2.Title = "Title insert 5";
    t2.InsertUpdateTable2();

    t1.sqlManager.CommitTransaction();
}
catch (Exception ex)
{
    t1.sqlManager.RollbackTransaction();
}

By sharing the TransactionScope object of the first entity, which initialised the transaction, with the subsequent entities, you are able to chain them together in a transaction. If one call fails, the whole transaction will be rolled back.

Extension Methods

Part of the KooBoo Framework is a set of extension methods for various types that allow you to perform quick tasks without writing the code yourself.

DataTable – Select Duplicates:

DataTable dt = new DataTable();
List<object> olist = dt.SelectDuplicates("ColumnName");

This will return a list of duplicate rows in the DataTable filtered on the column specified.

DataTable – Select Uniques:

DataTable dt = new DataTable();
List<object> olist = dt.SelectUniques("ColumnName");

This will return a list of unique rows in the DataTable matched on the column specified.

DataTable – Get row column value as string:

DataTable dt = new DataTable();
string columnVal = dt.GetDataCellValue(0, "ColumnName");

DataTable – Add Column

DataTable dt = new DataTable();
dt.AddColumn("My Column 2");

String – Convert string to Int32

string str1 = "1";
int myInt = str1.ToInt32();

Int32 – Format an Int32 to 0 based value

int myInt = 123;
string formatted = myInt.FormatInt32(Functions.Int32Formatter.ZeroBased);

DateTime – Format a date into a readable string

DateTime myDate = DateTime.Now;
string formatted = myDate.FormatDate(Functions.DateFormatter.HourMin24);

DateTime – Difference between two dates

DateTime myDate1 = DateTime.Now;
DateTime myDate2 = DateTime.Now.AddMonths(30);
int years, months, days;

myDate1.DateDifference(myDate2, out years, out months, out days);

Imaging Functions

The imaging functions allow you to resize images from a file or from a memory stream by specifying the width and height. This is ideal for creating thumbnails that are proportionally resized to fit into the dimensions specified.

ImagingFunctions img = new ImagingFunctions();
MemoryStream stream = img.ResizeImagePreportional(blobStreamInput, 200, 322);

 

KooBoo.Framework NuGet package

You can add the KooBoo.Framework library to your project using the NuGet package.

PM> Install-Package KooBoo.Framework

https://www.nuget.org/packages/KooBoo.Framework/

 

KooBoo POCO Generator

The latest release of the KooBoo POCO generator can be downloaded from GitHub:
https://github.com/TechnoDezi/KooBoo.Framework.POCO/

Keep a lookout for updated releases of both the Generator and the Framework for more features and enhancements.

JSON Api endpoint mocking and Proxy recording

NOTE: Moved to GitHub - https://github.com/TechnoDezi/Mock-KingBird

Have you ever wanted to create a quick mobile app, maybe to showcase a concept or to demo mobile possibilities to a client? Are you stuck on the creation of API endpoints for your app?

I have had a huge problem lately with mobile app development and the creation of test API endpoints that might or might not be used again. Creating API endpoints for a quick demo or showcase is a lot of work, especially if you aren't sure yet how the database might look or where the data will come from. Hard-coding the data in the app is also out of the question, since you always want to write production-ready code.

I have found a few other similar tools, but they are either very crude and command-line based, or they live in the cloud. Some of the cloud-based tools are great, but you cannot use them to proxy existing API calls, and if you handle sensitive client data, such as in banking apps, you might need something that runs locally, where you know your data is more secure and less exposed.

Over the weekend I created an API mocking / stubbing tool that you can use to create API endpoints for your up-and-coming next-big-hit mobile app. No-fuss endpoint creation saves you from needing a database while you concentrate on the app itself.

With this tool, which I call "Mock-KingBird", you can create endpoints easily using only JSON and an idea of how the data should look. The tool can also proxy existing API calls and record the responses in order to create a baseline for you to work from, then play back the recorded data while testing.

Mock-KingBird supports the following functionality at the time of writing:

  • Multiple projects
  • Api proxy and record
  • Proxy playback of recorded data
  • New endpoint creation
  • JSON editor and Viewer – so that you never have to leave the app

Endpoint URLs are just that … URLs. With Mock-KingBird there is no worrying about how a request executes or where it is handled - requests are simply matched on the URL structure, request method and request data. If a request signature matches recorded data, the response is returned to the client.
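
Conceptually, that matching can be pictured as building a lookup key from those three parts. The following is only an illustrative C# sketch of the idea, not Mock-KingBird's actual implementation (the tool itself is JavaScript):

using System.Collections.Generic;

class MockStore
{
    // Recorded responses keyed by request signature.
    private readonly Dictionary<string, string> recorded = new Dictionary<string, string>();

    // Build a signature from the request method, URL structure and request body.
    private static string Signature(string method, string url, string body)
        => $"{method.ToUpperInvariant()} {url}\n{body}";

    // Store a response while proxying and recording.
    public void Record(string method, string url, string body, string response)
        => recorded[Signature(method, url, body)] = response;

    // During playback, return the recorded response on a match, else null.
    public string Playback(string method, string url, string body)
        => recorded.TryGetValue(Signature(method, url, body), out var response) ? response : null;
}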

Because Mock-KingBird is written using Node-Webkit, it supports multiple desktop platforms.

You can download the app here:
https://github.com/TechnoDezi/Mock-KingBird


Change log:

  • v1.1.1
    • App Create
  • v1.1.2
    • Added json input text select on click for easy copy
    • Added full screen support and multi platform build
    • Fixed proxy response time
  • v1.1.3 
    • Added json editor
    • Improved stability
  • v1.1.4 
    • Fixed Json editor accept not saving 
    • Object matching instead of signature, more stable request matching
    • Implemented Copy & Paste for Mac
  • v1.1.5 
    • Added ability to lock endpoints from being overwritten during recording, especially for manual changes
    • Add error handling for json parsing on endpoints edit
  • v1.1.6 (Current release)
    • New look & feel
    • Add full url to endpoint list title
    • Add dual mode proxy and playback – Playback recorded data and proxy what isn't found
  • (Planned) 
    • Project / Endpoint Templates
    • Proxy html / non JSON calls without recording
    • Export / Import of project or database
    • Dynamic lenient matching
    • Clone Endpoint & Project

If you experience any problems please leave me a comment and I will try to slot it into the backlog for the next release.
Updates will be posted here, so please check back soon.

This app is free to use, but if it works for you or you would like to contribute to the continued development, please make a donation.