Simple Data Mining<br />Blog posts that describe complicated data mining techniques and topics in simple English for people who aren't pursuing a PhD in data mining.<br />By Jed Isom<br /><br /><b><u>How to Deal with Mixed Data Types when Performing K-means Clustering</u></b> (2015-05-02)<br /><br />When we look at a group of dots on a page, most of us can intuitively put these dots into groups that appear to "belong" together because they seem to be close to each other. In data mining we call these clusters. Clustering with the human eye works pretty well for problems in 2, or sometimes 3, dimensions. As an example, if you were to look at the chart below, how would you cluster/group the points together?<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-BdDudVeZ7_M/VUOVlQVxYxI/AAAAAAAACW8/xUINKHSb8zk/s1600/2D%2BCluster%2BSpace.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-BdDudVeZ7_M/VUOVlQVxYxI/AAAAAAAACW8/xUINKHSb8zk/s1600/2D%2BCluster%2BSpace.jpg" height="276" width="400" /></a></div><br />I think that most of us would group the 3 points in the upper left corner together, and then group the points on the right side together. Or maybe you'd group the upper left together and then create 2 groups on the right side (one for the upper 2 dots, and one for the lower 2). As you can see, this practice is not really definitive, but there is a lot of information to be gained from data sets by grouping data points together. Therefore, data mining seeks to make this process of finding the best clusters objective, so that we can perform this analysis on multi-dimensional data with little to no human interaction.<br /><br />Sometimes our data is not purely numerical. 
We may have to deal with data that has categories like male/female, ethnic background, etc. Or, maybe our data set has ordered categories (ordinal data) like t-shirt size (small, medium, large, extra large), or relative temperature (cold, warm, and hot). We would still like to be able to cluster our data points, even though all of the data types don't fit the same cookie cutter mould. To explain this, I'm going to start by showing a simple example of the k-means algorithm with just numerical data. Then, we'll add in a category and an ordinal data type to see how we can perform the same process with different data types.<br /><br />Since we already have a simple 2-dimensional set of data points above, we'll stick with that example and use the k-means algorithm to solve for the best clusters. The 'k' in k-means is a user-defined variable that controls how many clusters we're going to look for. To keep things simple we'll set k=2, which means that when we're done, we'll have the data points in the example split and assigned to 2 groups.<br /><br />To mathematically define these 2 clusters, k-means uses 2 points in space that aren't on the graph yet. These points are called cluster centers, or centroids. Think of a centroid as the center of gravity of all of the points assigned to it. When we're all done, we want the distance between these centroids and the points assigned to them to be minimized. If we do that, we'll get centroids that have data points tightly packed around them, which is intuitively what we are looking for when we are doing this by eye. The original k-means algorithm starts by randomly picking values for these cluster centers and then iteratively improves their position to minimize the distance as described above. For our example, I have randomly placed 2 cluster centers (red and green). These cluster centers are also referred to as seeds. 
The red seed is located at (2.2087,4.1076) and the green seed is located at (4.8579, 0.3341).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-TMPn2HyFwto/VUOaqaVillI/AAAAAAAACXQ/CgOQGYu3c1w/s1600/2D%2BClusters%2Bwith%2BSeeds.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-TMPn2HyFwto/VUOaqaVillI/AAAAAAAACXQ/CgOQGYu3c1w/s1600/2D%2BClusters%2Bwith%2BSeeds.jpg" height="276" width="400" /></a></div><br /><br />The question is how we will define "distance" from the data points to these seeds. There are multiple ways to define the distance between 2 points. To visualize a couple of them, let's say that we have randomly picked two cluster centers (in red and green below) and we're trying to calculate the distance to one of the other blue data points.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-xE8dotYPlq4/VUOnsHsu5GI/AAAAAAAACXs/36qFagBvsSA/s1600/2D%2BCluster%2Bshowing%2Bdistance%2Bmetrics.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-xE8dotYPlq4/VUOnsHsu5GI/AAAAAAAACXs/36qFagBvsSA/s1600/2D%2BCluster%2Bshowing%2Bdistance%2Bmetrics.jpg" height="276" width="400" /></a></div><br />A very simplistic way of calculating the distance between 2 data points is to use the "Manhattan" distance. Think of this as how far you would have to walk to get somewhere in Manhattan, New York where it's all city blocks and you can't just jump over or walk through buildings. This is also called the L1 norm. I described how to calculate this in my <a href="http://simpledatamining.blogspot.ca/2015/04/simple-pagerank-algorithm-description.html" target="_blank">PageRank blog post</a>. 
If you're curious about it you can learn more there.<br /><br />Another way of calculating distance is called the Euclidean distance. This is like the saying "the shortest distance between 2 points is a straight line". To calculate this distance we use the square root of the sum of the squares. If we have 2 points in space where point 1 = (x1,y1) and point 2 = (x2,y2), then the Euclidean distance between them looks like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-ExzZrWgWKbw/VUOor2ShAgI/AAAAAAAACX0/_R2EaK4CzTY/s1600/Euclidean%2Bdistance%2Bformula.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ExzZrWgWKbw/VUOor2ShAgI/AAAAAAAACX0/_R2EaK4CzTY/s1600/Euclidean%2Bdistance%2Bformula.jpg" height="57" width="400" /></a></div><br />There are actually several other distance metrics we could use, but since it is fairly simple and it was the one used in the original paper on k-means, we'll stick with the Euclidean distance in our example.<br /><br />Now that we know how we're going to calculate distance and we know where our seeds are located, we can start the iterative process of finding the best location for the centroids. The process goes like this:<br /><br /><ol><li>Calculate the distances between all of the data points and all of the cluster centers</li><li>Assign each data point to the cluster center closest to it</li><li>Calculate the average x and y values for the points assigned to each cluster center</li><li>Set each cluster center's location to be equal to the average of the points assigned to it</li><li>Check to see if any data points have been assigned to a different cluster center</li><ol><li>If so, go back to the top and repeat</li><li>If not, then end the algorithm; you've got your answer</li></ol></ol><br />If you didn't follow that, it will become clear as we work through the example. 
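The Euclidean distance used in step 1 is easy to sketch in code. This is my own minimal Python illustration (not code from the original post); it works for 2 dimensions or more:

```python
import math

def euclidean(p, q):
    # Square root of the sum of squared coordinate differences.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(euclidean((1, 2), (4, 6)))  # → 5.0 (a 3-4-5 right triangle)
```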
For step 1, I'll show you a couple of the calculations by hand and then jump to a table of all of the distances. If we take the data point at (1,5), the distance from that point to the red seed at (2.2087,4.1076) is<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-lZu7tpUA_lc/VUOt-Z0ANUI/AAAAAAAACYc/Om1fdPBLOyw/s1600/First%2B2%2Bdistance%2Bcalculations.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-lZu7tpUA_lc/VUOt-Z0ANUI/AAAAAAAACYc/Om1fdPBLOyw/s1600/First%2B2%2Bdistance%2Bcalculations.jpg" height="124" width="640" /></a></div><br />You can see that I also gave you the answer for how to calculate the distance from (1,5) to the green cluster center (4.8579, 0.3341) in the image above. At this point, I'm going to assume that we can all replicate this until we're sick of it, so I'll just give you the rest of the answers in this table<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-bhsQajMMfdc/VUOumetrlUI/AAAAAAAACYs/abk8uS7pjfE/s1600/First%2BIteration%2BDistances.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-bhsQajMMfdc/VUOumetrlUI/AAAAAAAACYs/abk8uS7pjfE/s1600/First%2BIteration%2BDistances.jpg" height="233" width="400" /></a></div><br />Now that we have that completed we can assign the points to their nearest cluster center. The following points get assigned to the red cluster center: (2,3); (1,5); (3,5); (7,5). 
All the rest get assigned to the green cluster center.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-nKrpjYjU2Cc/VUOw3cUiSbI/AAAAAAAACY8/4Nl5EW1qxxE/s1600/2D%2BCluster%2BInitial%2BAssignments.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-nKrpjYjU2Cc/VUOw3cUiSbI/AAAAAAAACY8/4Nl5EW1qxxE/s1600/2D%2BCluster%2BInitial%2BAssignments.jpg" height="276" width="400" /></a></div><br /><br />Now we need to take the average values of the points assigned to each cluster center. For the red center, the average X value is (2+1+3+7)/4 = 3.25 and the average Y value is (3+5+5+5)/4 = 4.5. For the green cluster, the average X value is (9+8+7)/3 = 8 and the average Y value is (1+4+2)/3 = 2.3333. Now we assign these average values to be the new locations of the 2 cluster centers. So the new location of the red cluster center is (3.25, 4.5) and the new location of the green cluster center is (8, 2.3333).<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-wfsvoBWizs8/VUOxDSS9lKI/AAAAAAAACZE/6p5L8dcrGxo/s1600/2D%2BCluster%2BMove%2Bto%2B2nd%2Bcentroid%2Bposition.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-wfsvoBWizs8/VUOxDSS9lKI/AAAAAAAACZE/6p5L8dcrGxo/s1600/2D%2BCluster%2BMove%2Bto%2B2nd%2Bcentroid%2Bposition.jpg" height="276" width="400" /></a></div><br />Since this is the first iteration, it's obvious that we have changed the assignments of the data points to the cluster centers (they were previously unassigned). So, we have to repeat the process. 
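The whole assign-average-repeat loop can be sketched in plain Python. This is my own minimal illustration using the seven example points and the two seeds above; a real implementation (e.g. scikit-learn's KMeans) adds random restarts and other refinements:

```python
import math

# Data points and seed positions from the example above.
points = [(2, 3), (1, 5), (3, 5), (7, 5), (9, 1), (8, 4), (7, 2)]
centers = [(2.2087, 4.1076), (4.8579, 0.3341)]  # red seed, green seed

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

assignments = None
while True:
    # Steps 1-2: assign each point to its nearest cluster center.
    new_assignments = [
        min(range(len(centers)), key=lambda i: euclidean(p, centers[i]))
        for p in points
    ]
    # Step 5: stop once no point switches clusters.
    if new_assignments == assignments:
        break
    assignments = new_assignments
    # Steps 3-4: move each center to the mean of its assigned points.
    for i in range(len(centers)):
        members = [p for p, a in zip(points, assignments) if a == i]
        if members:  # (a real implementation would re-seed an empty cluster)
            centers[i] = tuple(sum(dim) / len(members) for dim in zip(*members))

print(centers)  # converges to roughly [(2.0, 4.3333), (7.75, 3.0)]
```

Running this reproduces the hand calculation that follows: the red center ends at (2, 4.3333) and the green center at (7.75, 3).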
With our new cluster centers of (3.25, 4.5) and (8, 2.3333), the distances between the points and the cluster centers look like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-uP-AE5mTqA4/VUOxZLFMgzI/AAAAAAAACZM/gPnGhy4RRdc/s1600/Second%2BIteration%2BDistances.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-uP-AE5mTqA4/VUOxZLFMgzI/AAAAAAAACZM/gPnGhy4RRdc/s1600/Second%2BIteration%2BDistances.jpg" height="232" width="400" /></a></div><br />When you look to see which cluster center is closest to each data point, you can see that some of the assignments are going to change. The data point at (7,5) is now closer to the green cluster center than the red cluster center.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-LHXLrEYJQUM/VUOyPizIJQI/AAAAAAAACZY/np2k9WzM73Q/s1600/2D%2BCluster%2BSecond%2BAssignment.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-LHXLrEYJQUM/VUOyPizIJQI/AAAAAAAACZY/np2k9WzM73Q/s1600/2D%2BCluster%2BSecond%2BAssignment.jpg" height="276" width="400" /></a></div><br />Now we take the average values of all of the data points assigned to red and green. 
The new location of the red center is (2,4.3333) and the new location of the green center is (7.75, 3).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-EfJvWUM8FlA/VUOyvYUfiSI/AAAAAAAACZg/5pPfDJ7poxg/s1600/2D%2BCluster%2B3rd%2BMove%2Band%2BAssignment.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-EfJvWUM8FlA/VUOyvYUfiSI/AAAAAAAACZg/5pPfDJ7poxg/s1600/2D%2BCluster%2B3rd%2BMove%2Band%2BAssignment.jpg" height="276" width="400" /></a></div><br />If you do the distance calculations again and assign the centers, as we have in prior iterations, you'll find that none of the data points get assigned to a different center even though the location of the center moved a little bit. We now have our final answer. The red cluster center is located at (2, 4.3333) and points (1,5), (2,3), and (3,5) are assigned to the red cluster. The green cluster center is located at (7.75,3) and points (7,2), (7,5), (8,4), and (9,1) are assigned to the green cluster.<br /><br />In a nutshell, that's how k-means works. It can be expanded to many, many dimensions using similar calculations. For 'k' different dimensions (a different 'k' than the number of clusters) the Euclidean distance calculation looks like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-u-tj5QHQiag/VUO0uR_yHyI/AAAAAAAACZs/KXA-PdR42fU/s1600/Multi-Dimensional%2BEuclidean%2BFormula.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-u-tj5QHQiag/VUO0uR_yHyI/AAAAAAAACZs/KXA-PdR42fU/s1600/Multi-Dimensional%2BEuclidean%2BFormula.jpg" height="65" width="640" /></a></div><br />Don't get confused by the letters in this formula. You just keep adding squares of differences for each dimension you're trying to compute the distance for. 
So if you're trying to calculate the distance between 2 points in 4-dimensional space like (1,2,3,4) and (10,9,8,7), you would get<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-_LX_eG1cPbM/VUO2m1ZHETI/AAAAAAAACZ4/MbZnCs7XCeg/s1600/4%2Bdimensional%2Bdistance%2Bcalculation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-_LX_eG1cPbM/VUO2m1ZHETI/AAAAAAAACZ4/MbZnCs7XCeg/s1600/4%2Bdimensional%2Bdistance%2Bcalculation.jpg" height="49" width="640" /></a></div>Once you adapt this to your problem, the rest of the algorithm is basically the same.<br /><br />The selection of the seed cluster centers is also very important. As I mentioned, I randomly picked the values in this example. If we randomly pick different values, depending on the "shape" and structure of the data, we can get different answers. The originator of the algorithm suggested that you just run the algorithm several times with different random starting values and then pick the answer that minimizes the total distance between cluster centers and their assigned data points. The k-means++ algorithm improves on this, but that's beyond the scope of this post.<br /><br />The last thing I need to tell you before we change topics is what you need to do if for some reason no data points get assigned to one of your cluster centers. If this happens, basically, the cluster center is too far away from the data. One simple way to deal with this is just to randomly assign a new location to that cluster center and continue the algorithm. 
This might make the algorithm wander around a little more looking for the solution, but it's better than only getting 1 cluster center when you were looking for 2.<br /><br /><b><u>Dealing with Different Data Types</u></b><br />Now what do you do if your data set looks like this?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-3ThagpmRBUY/VUO4rKgkuJI/AAAAAAAACaE/b2uReotgSJA/s1600/Data%2Bwith%2Bdifferent%2Btypes.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-3ThagpmRBUY/VUO4rKgkuJI/AAAAAAAACaE/b2uReotgSJA/s1600/Data%2Bwith%2Bdifferent%2Btypes.jpg" height="200" width="400" /></a></div><br />Before you answer "uuuhh... give up?", read the rest of this post.<br /><br />I know that there are some of you out there who thought "white and black aren't colors; they're shades" when you looked at the table above. OK, Mr. Smarty Pants, you're right, but let it go for now.<br /><br />The most important part of adding in different data types is figuring out a way to measure "distance" between these variables. We'll start by describing how this can be done with the size category above. Since size has a sort of intuitive order to it, where medium is bigger than small and large is bigger than medium, we can create a pseudo-distance fairly easily. The easiest way to think of this is to just make the lowest value ("small" in our case) equal to 0 and the highest value ("large") equal to 1. Then we just divide up the space between 0 and 1 equally to fit in any other values we have in our list. Since we only have 3 values in our example, this is easy and we set "medium" to 0.5. 
If you have a longer list of values, you can use this general equation to assign values to your labels between 0 and 1.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-08a49x3rSv4/VUTt3PI1anI/AAAAAAAACaY/Ax_JwuSXguE/s1600/Ordinal%2BData%2BValue%2BConversion%2BEquation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-08a49x3rSv4/VUTt3PI1anI/AAAAAAAACaY/Ax_JwuSXguE/s1600/Ordinal%2BData%2BValue%2BConversion%2BEquation.jpg" /></a></div><br />In this equation, the Zi is a pseudo-location like an X or Y value. The ri term is the rank of the label in the list you're trying to convert, and M is the total number of labels in the list. So, for our example, M=3, and if I'm trying to calculate Zi for "medium" then ri is equal to 2; ri for "small" would be 1, and ri for "large" would be 3. Take some time to plug those numbers into the equation above to make sure you can see how this gives you the values we already intuitively assigned in the paragraph above.<br /><br />Now that we have these "location" values for size, we can treat the size variable just like the X and Y variables. We'll have to modify X and Y soon too, but there'll be more on that later.<br /><br />Now how do we deal with the pesky color value? Some of you out there may think that color could be done the same way as size because we can use the light frequency spectrum values for order. Let's just assume instead, for demonstration purposes, that we are thinking the colors are from a color wheel with no beginning or end. That means we have to treat them like categories. For categories, we don't transform the values ("white", "red", or "black") into numbers, but we use a different distance calculation. We say that if 2 points have the same color, the category distance between them is 0. If the 2 points have different colors, the category distance between them is 1. 
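In code, the ordinal conversion and the 0-or-1 category distance just described look like this (a small Python sketch of my own, using the equation above):

```python
def ordinal_value(rank, m):
    # z_i = (r_i - 1) / (M - 1): spreads M ordered labels evenly over [0, 1].
    return (rank - 1) / (m - 1)

sizes = ["small", "medium", "large"]  # ranks 1, 2, 3
coded = {label: ordinal_value(r, len(sizes)) for r, label in enumerate(sizes, start=1)}
print(coded)  # {'small': 0.0, 'medium': 0.5, 'large': 1.0}

def category_distance(a, b):
    # Categories with no order: distance 0 if equal, 1 otherwise.
    return 0 if a == b else 1
```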
So the distance between "white" and "white" is 0, between "white" and "red" is 1, and between "white" and "black" is 1 too. You can see that it is more difficult to be "close" to a point here if there are multiple values to choose from in a category.<br /><br />You may have noticed that our transformations of size and color both have a scale from 0 to 1. If we just leave the values of X and Y the way they are, their distances will appear much larger than the distances for size and color just because of their scale. To solve this problem we can scale X and Y down to a range close to 0 to 1. Once we do this, all of the different variables (X, Y, size, color) will get roughly the same distance weighting. A simple way to scale numerical variables like X and Y is to compute the Z-score. The Z-score will likely look very familiar to you if you've been through a statistics course.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-5ANmtBec3ZM/VUTxludMeZI/AAAAAAAACak/S2RenPJKjpk/s1600/Z-score.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-5ANmtBec3ZM/VUTxludMeZI/AAAAAAAACak/S2RenPJKjpk/s1600/Z-score.jpg" /></a></div><br />The Z will replace our X and Y values in our table above after this transformation. The x in the equation is a specific value of X or Y in the table above. Mu (<span style="font-family: 'Cambria Math';">μ</span>) is the average (mean) value of X or Y, and sigma (<span style="font-family: 'Cambria Math';">σ</span>) is the standard deviation of X or Y. For example, if I want to transform the values in the Y column, I need the average (1+4+2+3+5+5+5)/7 = 3.5714..., and the standard deviation, which equals 1.6183... (if you want more understanding of how to calculate a standard deviation you can easily find this on Wikipedia <a href="http://en.wikipedia.org/wiki/Standard_deviation" target="_blank">here</a>). 
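The Z-score transformation is easy to sketch with Python's statistics module (my own illustration; note that the 1.6183 figure is the sample standard deviation, which divides by n-1):

```python
from statistics import mean, stdev

y = [1, 4, 2, 3, 5, 5, 5]   # the Y column of the example table
mu = mean(y)                 # 3.5714...
sigma = stdev(y)             # sample standard deviation, 1.6183...
z_scores = [(v - mu) / sigma for v in y]
```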
Now that I have those 2 values, I'll calculate the first couple Z-scores for the Y variable.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-rY0703OQedM/VUT0h34AAjI/AAAAAAAACaw/5GnslfMdBKE/s1600/Calculating%2Bsome%2BZ-scores.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-rY0703OQedM/VUT0h34AAjI/AAAAAAAACaw/5GnslfMdBKE/s1600/Calculating%2Bsome%2BZ-scores.jpg" /></a></div><br /><br />If we complete all of these calculations, we can transform our table into something that we can use to measure distance with all of these different data types.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-p_t3bybsf74/VUT1zKKSuWI/AAAAAAAACa8/YMvbmTha33E/s1600/Normalized%2BPoint%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-p_t3bybsf74/VUT1zKKSuWI/AAAAAAAACa8/YMvbmTha33E/s1600/Normalized%2BPoint%2BTable.jpg" height="200" width="400" /></a></div><br />Now that we've adjusted all of the variables, we're ready to introduce the mixed-variable-type distance equation. It works pretty much like the Manhattan distance described earlier.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-w_4F-MKL-FE/VUT5Pjm-1gI/AAAAAAAACbI/WOXs7rFGaPg/s1600/Distance%2BFormulat%2Bfor%2BDifferent%2BData%2BTypes.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-w_4F-MKL-FE/VUT5Pjm-1gI/AAAAAAAACbI/WOXs7rFGaPg/s1600/Distance%2BFormulat%2Bfor%2BDifferent%2BData%2BTypes.jpg" /></a></div><br />The first time I saw this I just about fell over from all of the subscripts and superscripts in this equation. It looks WAY more complicated than it actually is. Think of it as a weighted average of distances. 
The Dij on the left is the distance between points i and j (e.g. point #'s 1 and 5 in our table above). The summations on the top and bottom are just summing up over all of the attributes/variables we are analyzing in our table. So P would be equal to 4 because we have 4 attributes (Zx, Zy, Zsize, and Color). The Wij terms are user-selected weights that can be used to push the clustering to consider some attributes to be more important than others. In our example, we'll set these equal to 1 for simplicity. Once we do that, the formula is really just an average (mean) Manhattan distance of all of the attributes in our table. If you didn't follow that, you'll see what I mean when we find the new cluster centers for our example.<br /><br />If I plot the Zx and Zy variables on a chart (I can't plot 4 dimensions, and 3 dimensions is a little tricky) it would look like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-3z283xkUohs/VUT9VR4eh3I/AAAAAAAACbU/_RjZ3TZFe8I/s1600/Mixed%2BData%2BDataspace.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-3z283xkUohs/VUT9VR4eh3I/AAAAAAAACbU/_RjZ3TZFe8I/s1600/Mixed%2BData%2BDataspace.jpg" height="305" width="400" /></a></div><br />You can see that it is not so easy to visually identify where the cluster centers are going to end up. This is why we solve this with math instead of by eye. To start, just like last time, we randomly select new seed cluster centers and then calculate the distances from all of the data points to these cluster centers. Suppose that I pick the seed locations to be (-0.614, 0.107, 0.5, "red") and (-0.160, -0.116, 1, "black"). 
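Putting the pieces together, the mixed-type distance can be sketched like this. The function is my own illustration of the weighted-average formula; the two example rows are hypothetical (Zx, Zy, Zsize, color) values, not rows from the post's table:

```python
def mixed_distance(p, q, weights=None):
    # Weighted average of per-attribute distances: |difference| for
    # numeric or ordinal-coded attributes, 0-or-1 for categorical ones.
    weights = weights or [1.0] * len(p)
    total = wsum = 0.0
    for a, b, w in zip(p, q, weights):
        d = (0.0 if a == b else 1.0) if isinstance(a, str) else abs(a - b)
        total += w * d
        wsum += w
    return total / wsum

# Hypothetical rows, for illustration only.
a = (0.5, -1.0, 0.0, "red")
b = (-0.5, 0.0, 0.5, "black")
print(mixed_distance(a, b))  # (1.0 + 1.0 + 0.5 + 1.0) / 4 = 0.875
```

With all weights equal to 1, this is just the mean Manhattan distance across the four attributes, as described above.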
I can plot these on the chart like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-eab66cBD06E/VUUADRtO3nI/AAAAAAAACbg/jAd2UuvpW7I/s1600/Mixed%2Bdata%2Bspace%2Bwith%2Bseeds.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-eab66cBD06E/VUUADRtO3nI/AAAAAAAACbg/jAd2UuvpW7I/s1600/Mixed%2Bdata%2Bspace%2Bwith%2Bseeds.jpg" height="305" width="400" /></a></div><br />Notice that I changed the size labels from words to their coded number values in this version of the graph. The seeds I "randomly" selected have values for Zsize that matched exactly with "medium" and "large", but there is no restriction that ordinal attribute seeds match one of the values in the series. I could have just as easily picked a value like 0.33356. Of course, the category attribute values have to be selected from the available options ("white", "red", or "black"). Now we just need to compute some distances. Like last time, I'll show you how to calculate the distances from the 2 seeds to the point in the top left corner, and then give the rest of the answers.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-zWStfPuuJ30/VUUDelBRCsI/AAAAAAAACbs/axtL3eRQgFo/s1600/Examples%2Bof%2Bmixed%2Bdistance%2Bcalculations.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-zWStfPuuJ30/VUUDelBRCsI/AAAAAAAACbs/axtL3eRQgFo/s1600/Examples%2Bof%2Bmixed%2Bdistance%2Bcalculations.jpg" height="113" width="640" /></a></div><br />Notice that I am using absolute values for the distance calculations. When you use the Euclidean distance you don't need to worry about this because a squared number will always be positive. The other thing you need to notice is that for the last term in the numerator, I just did the color category distance comparison in my head and put it in there. 
The first seed (the red diamond...don't get confused here with the colors of the cluster centers and the labels) had its color label equal to "red" and the data point in the top left is "red" too, so the color distance between them is 0. For the green seed, it was labelled "black" so the color distance between it and the top left data point is 1. If you continue to calculate the distances between the cluster centers and all of the data points, you'll end up with a table that looks like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Il-ImdLeObw/VUUFXlPVt0I/AAAAAAAACb4/zTMH3rSm3LA/s1600/Mixed%2BFirst%2BIteration%2BDistances.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-Il-ImdLeObw/VUUFXlPVt0I/AAAAAAAACb4/zTMH3rSm3LA/s1600/Mixed%2BFirst%2BIteration%2BDistances.jpg" height="211" width="640" /></a></div><br />I highlighted in green the distances that are lower for each data point. We'll use these lower distances to assign each point to a cluster center. So points 1, 2, 4, and 6 are assigned to the green cluster center and points 3, 5, and 7 are assigned to the red cluster center. Now, we have to calculate the new cluster centers based on the means of all of the attributes. We do this the exact same way we did it before, except for the color attribute. Zx, Zy, and Zsize are all now numerical and we just calculate the mean of the values assigned to each cluster. For the color attribute, we don't actually take the mean. Instead we take the mode. If you don't remember what a mode is, it's just the value that comes up most often in the data. So for the red cluster (points 3, 5, and 7) we have 2 points that are "red" and 1 point that is "white". So we'll make the new cluster center point "red". Similarly, the green cluster center has points that are "white", "black", "black", and "black". 
So the new green cluster center is assigned to be "black". There is a possibility that there is a tie between 2 values. If that ever happens, the easiest thing to do is just to create an arbitrary tie-breaker rule like "always prefer white over red, and prefer red over black". This rule will create some bias in the results, but in a large data set this situation will happen pretty infrequently, so it is not much of a concern. If we do all of the mean calculations and assign the new colors to the cluster centers, we get new cluster centers that look like the last 2 rows of the table below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-JTbNym5tOSY/VUUZ_y7jVMI/AAAAAAAACcI/qUtFsCFypI8/s1600/2nd%2BMixed%2BCluster%2BCenters.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-JTbNym5tOSY/VUUZ_y7jVMI/AAAAAAAACcI/qUtFsCFypI8/s1600/2nd%2BMixed%2BCluster%2BCenters.jpg" height="250" width="400" /></a></div><br />If we plot this on the graph, we can see the shift in the location of the cluster centers.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-KoYNa_L0jgM/VUUbFAQy7PI/AAAAAAAACcQ/nbCdtpZ-g0Y/s1600/Mixed%2BDataspace%2Bafter%2B1st%2BIteration.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-KoYNa_L0jgM/VUUbFAQy7PI/AAAAAAAACcQ/nbCdtpZ-g0Y/s1600/Mixed%2BDataspace%2Bafter%2B1st%2BIteration.jpg" height="305" width="400" /></a></div><br />If we go through the same distance calculation process as before we see the distances come out like this<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a 
href="http://2.bp.blogspot.com/-hVsafSfQ3t0/VUUgklAyd8I/AAAAAAAACcw/SAMlYAZPl6s/s1600/Mixed%2BData%2Bafter%2B2nd%2BIteration.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-hVsafSfQ3t0/VUUgklAyd8I/AAAAAAAACcw/SAMlYAZPl6s/s1600/Mixed%2BData%2Bafter%2B2nd%2BIteration.jpg" height="211" width="640" /></a></div><br />You may notice that although the distances are different, the assignments are the same. The red cluster center is still closest to points 3, 5, and 7 and the green cluster center is closest to points 1, 2, 4, and 6. That's our signal to stop the algorithm and give the final assignments. They look like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-N6t06YoFBp8/VUUizZ2sgUI/AAAAAAAACc8/DYcTvQqbczQ/s1600/Final%2BAnswer.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-N6t06YoFBp8/VUUizZ2sgUI/AAAAAAAACc8/DYcTvQqbczQ/s1600/Final%2BAnswer.jpg" height="488" width="640" /></a></div><br />In 2D space this final answer to the clustering problem doesn't look so awesome, but as far as I can tell with my limited 4-dimensional vision it's the optimal solution with the seeds we started with. The total distance from the cluster centers to the assigned data points can be calculated from the distances in the table above. Red distances = 0.465 + 0.592 + 0.686 = 1.743. Green distances = 0.996 + 0.311 + 0.437 + 0.591 = 2.335. The total distance is 4.078. As mentioned, we have no guarantee that this is the absolute best way to cluster these data points. I used Excel to run this clustering hundreds of times and found an optimal solution whose total distance was only 3.229. 
The clustering for that solution looks like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-YzNGJW0D1VU/VUUpQ-o5EGI/AAAAAAAACdM/BVrUWhEB2p0/s1600/Final%2BOptimal%2BAnswer.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-YzNGJW0D1VU/VUUpQ-o5EGI/AAAAAAAACdM/BVrUWhEB2p0/s1600/Final%2BOptimal%2BAnswer.jpg" height="488" width="640" /></a></div><br />This looks a lot like the solution for the non-categorical problem. It seems that the clusters we found in that example are still pretty significant in this example.<br /><br />That summarizes how to do K-means clustering with numerical and non-numerical data. I hope this has helped somebody out there. Let me know if you have any comments or questions about anything in this post.<br /><br /><b><u>How to Mine Frequent Patterns in Graphs with gSpan including a Walk-thru Example</u></b><br /><div dir="ltr" style="text-align: left;" trbidi="on">In this blog post, I'm going to explain how the gSpan algorithm works for mining frequent patterns in graphs.
If you haven't already read my introductory post on this subject, please click <a href="http://simpledatamining.blogspot.ca/2015/03/graph-pattern-mining-gspan-introduction.html" target="_blank">HERE</a> to read the post that lays the foundation for what will be described in this post.<br /><br /><b><u>Getting Started</u></b><br />To explain how this algorithm works, we're going to work through a simplified data set so that we can see each step and understand what is going on. Suppose that you have 5 graphs that look like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-45YJ_Q7TXSo/VTqBd0DhM_I/AAAAAAAACOY/ZmS4PlEt2qg/s1600/5%2Bexample%2Bgraphs.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="176" src="https://2.bp.blogspot.com/-45YJ_Q7TXSo/VTqBd0DhM_I/AAAAAAAACOY/ZmS4PlEt2qg/s1600/5%2Bexample%2Bgraphs.jpg" width="640" /></a></div><br />Don't get confused by the Greek characters here. I used random symbols to show that this method can be used for any type of graph (chemicals, internet web diagrams, etc.), even something completely made up like what I did above. The first thing we need to do with this graph data set is count how many vertices and edges we have of each type. I count 7 alpha(<span style="background-color: #f9f9f9; color: #252525; font-family: sans-serif; font-size: 12.3199996948242px; line-height: 17.2479991912842px;">α</span>) vertices, 8 beta(<span style="background-color: #f9f9f9; color: #252525; font-family: sans-serif; font-size: 12.3199996948242px; line-height: 17.2479991912842px;">β</span>) vertices, and 14 lambda(<span style="background-color: #f9f9f9; color: #252525; font-family: sans-serif; font-size: 12.3199996948242px; line-height: 17.2479991912842px;">λ</span>) vertices. There are 15 edges that are solid blue, 13 that are double red, and 8 that are dotted green.
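The counting and frequency-ordered relabelling can be automated with a short sketch like the following. The two toy graphs are invented stand-ins for the Greek-labelled figures, just to show the mechanics: each graph is a list of vertex labels plus `(u, v, edge_label)` tuples.

```python
from collections import Counter

# each graph is (vertex_labels, edges); an edge (u, v, lbl) connects
# vertex u to vertex v with edge label lbl -- toy data, not the figures
graphs = [
    (["alpha", "beta", "lambda", "lambda"],
     [(0, 1, "blue"), (1, 2, "red"), (2, 3, "blue")]),
    (["beta", "lambda", "lambda"],
     [(0, 1, "blue"), (1, 2, "green")]),
]

# count every vertex and edge label across all graphs
vertex_counts = Counter(lbl for vertices, _ in graphs for lbl in vertices)
edge_counts = Counter(lbl for _, edges in graphs for _, _, lbl in edges)

# the most frequent label gets the "smallest" code: vertices A, B, C...
# and edges a, b, c..., which is the ordering the search uses later
vertex_code = {lbl: chr(ord("A") + i)
               for i, (lbl, _) in enumerate(vertex_counts.most_common())}
edge_code = {lbl: chr(ord("a") + i)
             for i, (lbl, _) in enumerate(edge_counts.most_common())}
```

With the toy data above, `lambda` (4 occurrences) maps to A and `blue` (3 occurrences) maps to a, mirroring the relabelling in the figures.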
Now that we have these counts we need to order the original labels for vertices and edges into a code that will help us prioritize our search later. To do this we create a code that starts with the vertices and edges with the highest counts and then descends like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-55dC-QKfb4g/VTqDJ13UsJI/AAAAAAAACOk/w64KmCT1CXw/s1600/Graph%2BLegend%2B(part%2B2).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="160" src="https://2.bp.blogspot.com/-55dC-QKfb4g/VTqDJ13UsJI/AAAAAAAACOk/w64KmCT1CXw/s1600/Graph%2BLegend%2B(part%2B2).jpg" width="320" /></a></div><br />We could have used numbers instead of letters here; it doesn't really matter as long as you can keep track of the symbols' order based on frequency. If you relabel the graphs with this order you get something like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-Gb09OfFlJ6Y/VTqSybF4BII/AAAAAAAACO0/-DdEigX7jIg/s1600/5%2Bexample%2Bgraphs%2Brelabeled.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="176" src="https://4.bp.blogspot.com/-Gb09OfFlJ6Y/VTqSybF4BII/AAAAAAAACO0/-DdEigX7jIg/s1600/5%2Bexample%2Bgraphs%2Brelabeled.jpg" width="640" /></a></div>You can see that the vertex labels now have A, B, C and each edge is labelled a, b, or c to match the new label legend we created.<br /><br /><b><u>Finding Frequent Edges</u></b><br />For this example we will assume that we have a <a href="http://simpledatamining.blogspot.ca/2015/02/support-and-confidence-in-pattern.html" target="_blank">minimum support frequency</a> of 3 (i.e. 3 of the 5 graphs have to have the pattern for us to consider it important). We start by looking at individual edges (e.g. A,b,A) and see if they meet this minimum support threshold. I'll start by looking at the first graph at the top.
There is an edge that is (B,a,C) and I need to count how many of the graphs have an edge like this. I can find it in the 1st graph, 2nd graph, 3rd graph, and 4th graph so it has support of 4; we'll keep that one. Also notice that I had the option of writing that edge as (C,a,B) as well, but chose to write it with B as the starting vertex, because of the order we established above. We move on a little and check (A,a,C) and find that it can only be found in the first graph; it gets dropped. We check (A,c,C) and find it in the first 4 graphs as well; keep it. If we continue going through all of the graphs being careful not to repeat any edge definitions we'll get a list of frequent single edges that looks like this<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-vGv68Y-P9X8/WO69Ltk8YpI/AAAAAAAADZk/HUX903ey7YAbHD_jBsyU_G3iu8FwCxNDACLcB/s1600/1%2Bedge%2Bsupport%2Bcount.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-vGv68Y-P9X8/WO69Ltk8YpI/AAAAAAAADZk/HUX903ey7YAbHD_jBsyU_G3iu8FwCxNDACLcB/s1600/1%2Bedge%2Bsupport%2Bcount.jpg" /></a></div><br />You can see that we only have 3 edges that are frequent (this will simplify our lives in this example greatly). We need to sort the edges that are still frequent in DFS lexicographic order (See <a href="http://simpledatamining.blogspot.ca/2015/03/graph-pattern-mining-gspan-introduction.html" target="_blank">prior blog post</a> for explanation of this). 
<span style="background-color: white;">After sorting we end up with this list of edges to consider going forward</span><br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-OTrygsC2INY/VTqgCCv_3kI/AAAAAAAACPs/5d_6VCAsf2A/s1600/1%2Bedge%2Bsupport%2Bcount%2B(ordered).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-OTrygsC2INY/VTqgCCv_3kI/AAAAAAAACPs/5d_6VCAsf2A/s1600/1%2Bedge%2Bsupport%2Bcount%2B(ordered).jpg" /></a></div><br />Pay attention to the fact that I added the "0,1" to the beginning of these edge definitions. These are the starting and ending points of the edges. We'll "grow" the rest of the frequent sub-graphs from these edges in a way similar to the method described in <a href="http://simpledatamining.blogspot.ca/2015/03/graph-pattern-mining-gspan-introduction.html" target="_blank">my prior blog post on this subject</a>. <br /><br />gSpan starts with the "minimum" edge, in this case (0,1,A,b,B), and then tries to grow that edge as far as it can while looking for frequent sub-graphs. Once we can't find any larger frequent sub-graphs that include that first edge, then we move on to the 2nd edge in the list above (0,1,A,c,C) and grow it looking for frequent sub-graphs. One of the advantages of this is that we won't have to consider sub-graphs that include the first edge (0,1,A,b,B) once we've moved on to our 2nd edge because all of the sub-graphs with the first edge will have already been explored. If you didn't follow that logic, it's OK; you'll see what I mean as we continue the example.<br /><br />You might notice that the last graph in our set of graphs doesn't have any edges that are frequent. When we're starting with edge (A,b,B) we keep track of which graphs have that edge present. In our example that's graphs 1, 2, 3, & 4. 
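The single-edge census we just did can be sketched in code. Each graph below is a list of vertex labels plus `(u, v, edge_label)` tuples; the three toy graphs are invented placeholders, not the figures, and the support rule is the one from the text: an edge type counts at most once per graph.

```python
from collections import Counter

def canonical(u_lbl, edge_lbl, v_lbl):
    """Write an undirected edge with the higher-priority (alphabetically
    earlier) vertex label first, so (C,a,B) and (B,a,C) are one type."""
    if u_lbl <= v_lbl:
        return (u_lbl, edge_lbl, v_lbl)
    return (v_lbl, edge_lbl, u_lbl)

def frequent_edges(graphs, min_support):
    """Support = number of graphs containing an edge type at least once."""
    support = Counter()
    for vertices, edges in graphs:
        # a set, so each edge type counts at most once per graph
        support.update({canonical(vertices[u], lbl, vertices[v])
                        for u, v, lbl in edges})
    return sorted(e for e, count in support.items() if count >= min_support)

# toy relabelled graphs: (vertex_labels, edges with (u, v, edge_label))
toy = [(["A", "B", "C"], [(0, 1, "b"), (1, 2, "a"), (0, 2, "c")]),
       (["A", "B", "C"], [(0, 1, "b"), (1, 2, "a")]),
       (["A", "B"],      [(0, 1, "b")])]
```

Calling `frequent_edges(toy, 2)` keeps (A,b,B) and (B,a,C) but drops (A,c,C), which only appears in one toy graph.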
Now that we have that "recorded" we start the growth from (0,1,A,b,B) by generating all of the potential children off of the 'B' vertex. <br /><br /><b><u>Growing from the 1st Edge</u></b><br />Before we grow our first edge, you need to understand how gSpan prioritizes growth. Let's assume that we have a generic subgraph that looks like this already (don't worry about how we got it yet, it will become clear in a second)<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-aLGmgXyRqMc/VTvBKQmaLHI/AAAAAAAACQE/IpGZ7MPPHbE/s1600/Start%2Bof%2BGrowth%2BPattern%2BExplanation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-aLGmgXyRqMc/VTvBKQmaLHI/AAAAAAAACQE/IpGZ7MPPHbE/s1600/Start%2Bof%2BGrowth%2BPattern%2BExplanation.jpg" /></a></div><br />I have removed the colors and labels from the diagram above to explain how gSpan grows without confusing you with labels. The numbers represent the order the vertices were added. If you are at this point in the algorithm, you will try to grow this graph in the following order of priority.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-EeLOy4sx0sk/VTvDsfqu0mI/AAAAAAAACQQ/HiYxic7j1Mk/s1600/Pattern%2Bgrowth%2Bpriority.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="160" src="https://1.bp.blogspot.com/-EeLOy4sx0sk/VTvDsfqu0mI/AAAAAAAACQQ/HiYxic7j1Mk/s1600/Pattern%2Bgrowth%2Bpriority.jpg" width="400" /></a></div><br />So the first priority is always to link the current end vertex back to the first vertex (the one labelled 0). If that's not possible, we see if we can link it back to the 2nd vertex (the one labelled 1). If we had a bigger sub-graph, we would continue trying to link to the 3rd, 4th, etc. if it's possible.
If none of those are possible then our next (3rd) priority is to grow from the vertex we're at (labelled 4 above), and grow the chain longer. If that doesn't work, we go back to the most recent fork, take another step back and then try to grow from there (see how 5 is growing from the vertex labelled 1 in the 4th priority). If that doesn't work, then we take another step up the chain and try to grow from there (5th example above). The 4th and 5th examples above also generalize for longer chains. If there were several vertices above the closest fork in the chain, you would progressively try to grow off of each of those vertices until you get back to the root (vertex labelled '0').<br /><br />With that understanding in mind, we can finally start growing from our first edge. Our first edge is (0,1,A,b,B). If we look at the priority explanation above, we can tell that the 1st and 2nd growth priorities are not really options, so we take the 3rd priority and try to grow from vertex 1, or the B vertex in our example. Since we only have 2 frequent edges that have 'B' vertices in them, our two options are as follows:<br /><br /><u>Option 1</u>: (0,1,A,b,B); (1,2,B,b,A)<br /><u>Option 2</u>: (0,1,A,b,B); (1,2,B,a,C)<br /><br />Now we are going to check for support of these 2 options. We only need to look in the graphs that we determined contain the frequent edges as described earlier.
When we look for Option 1, this is what we find in the graphs we're still considering:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-3KONTyf_Ucg/VTvH2QkbeUI/AAAAAAAACQc/c7EjkZf9chQ/s1600/Count%2Bof%2B1st%2BEdge%2BGrowth.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="392" src="https://2.bp.blogspot.com/-3KONTyf_Ucg/VTvH2QkbeUI/AAAAAAAACQc/c7EjkZf9chQ/s1600/Count%2Bof%2B1st%2BEdge%2BGrowth.jpg" width="400" /></a></div><br />Notice that we can find the first option in 3 different places in the first graph, but we can't find it in any of the other graphs, so the support is only 1. That's a dead end, so we check Option 2<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-1waSgqATtj0/VTvJWzvDpsI/AAAAAAAACQw/IsHQvhxwzOU/s1600/Count%2Bof%2B1st%2BEdge%2BGrowth%2B(option%2B2).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="271" src="https://2.bp.blogspot.com/-1waSgqATtj0/VTvJWzvDpsI/AAAAAAAACQw/IsHQvhxwzOU/s1600/Count%2Bof%2B1st%2BEdge%2BGrowth%2B(option%2B2).jpg" width="400" /></a></div><br />You can see that we can find this pattern in multiple places in all of the 4 graphs we have left (I didn't show all of the places it is found in the first graph). The support for this sub-graph is 4 so it's a keeper. Also, all 4 graphs we looked at have this sub-graph present so we keep those too. That ends that round of edge growth so we take Option 2 from above and try to grow from it. Remembering the priorities of growth, we know we need to try to connect the last vertex (2 or 'C' in our example) to the root (0 or 'A' in our example) if we can. If we look at the list of frequent edges to choose from, we see an edge that is (A,c,C) which could be written as (2,0,C,c,A). This would make the connection we need.
None of the other frequent edges meets this criterion, so we check to see if (0,1,A,b,B); (1,2,B,a,C); (2,0,C,c,A) is frequent<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-XxLVBv4w4eo/VTvLoWZGjQI/AAAAAAAACRA/NoQmSBqmzfk/s1600/2nd%2BEdge%2BGrowth%2BCount.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="271" src="https://4.bp.blogspot.com/-XxLVBv4w4eo/VTvLoWZGjQI/AAAAAAAACRA/NoQmSBqmzfk/s1600/2nd%2BEdge%2BGrowth%2BCount.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">This time the sub-graph occurs twice in the 1st, 2nd and 4th graphs. It only occurs once in the 3rd graph. Regardless, the support count for this sub-graph is still 4 so we've got a keeper again. Now we have a graph that looks something like this.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-mHeix-bSHGM/VTvODppIFcI/AAAAAAAACRU/tJHczTBeN98/s1600/3%2BEdge%2Bsubgraph.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-mHeix-bSHGM/VTvODppIFcI/AAAAAAAACRU/tJHczTBeN98/s1600/3%2BEdge%2Bsubgraph.jpg" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: left;">Since we have already connected the last vertex ('C') to the root ('A') and to the vertex after it ('B'), our next priority is to grow from the last vertex we created. The frequent edges we can try to add onto 'C' are (B,a,C) and (A,c,C). However, since we're working from 'C' they would be better written this way: (2,3,C,a,B) and (2,3,C,c,A).
We'll start by checking if adding (2,3,C,a,B) works because it is the lowest in terms of <a href="http://simpledatamining.blogspot.ca/2015/03/graph-pattern-mining-gspan-introduction.html" target="_blank">DFS lexicographic order</a>. Here's what we find.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-5bSF1zZQI2A/VTvP_Xi5OHI/AAAAAAAACRg/NoSYfRYanak/s1600/Adding%2BC%2Ca%2CB.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="263" src="https://1.bp.blogspot.com/-5bSF1zZQI2A/VTvP_Xi5OHI/AAAAAAAACRg/NoSYfRYanak/s1600/Adding%2BC%2Ca%2CB.jpg" width="400" /></a></div><br />As it turns out, this 4 edge pattern can only be found in 2 of the 4 graphs we have left so that doesn't work. Now we'll check if (2,3,C,c,A) works out.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-MH7-AFLxb1s/VTvQqMyCI4I/AAAAAAAACRo/XYH2SQPzzt8/s1600/Adding%2BC%2Cc%2CA.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="143" src="https://2.bp.blogspot.com/-MH7-AFLxb1s/VTvQqMyCI4I/AAAAAAAACRo/XYH2SQPzzt8/s1600/Adding%2BC%2Cc%2CA.jpg" width="400" /></a></div><br />Nope, that's only in the first graph so it's not frequent either. Since neither of these worked out, we need to go to our next priority (4th) which is growing from the vertex a step up from the last attempted branch. When you look at the sub-graph we've grown so far you can tell we need to try to grow from 'B' now. That being said, we take a look at our options for frequent edges that contain 'B' vertices and we see that we should try to add (B,a,C) or (B,b,A). 
Here's what trying to add (B,a,C) looks like in this scenario<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-2Y_nCyI5hAY/VTvSOh_CiUI/AAAAAAAACR0/Cq2oIwfK6_w/s1600/Branch%2Bfrom%2BB.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-2Y_nCyI5hAY/VTvSOh_CiUI/AAAAAAAACR0/Cq2oIwfK6_w/s1600/Branch%2Bfrom%2BB.jpg" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Let's see if we can find that in any of the graphs</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-O7eFOSq8Ik8/VTvTBjo5PuI/AAAAAAAACR8/4JX346YnqQQ/s1600/4%2Bedge%2Bsubgraph%2Bwith%2Bsupport.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="143" src="https://1.bp.blogspot.com/-O7eFOSq8Ik8/VTvTBjo5PuI/AAAAAAAACR8/4JX346YnqQQ/s1600/4%2Bedge%2Bsubgraph%2Bwith%2Bsupport.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div> As it turns out, we CAN find this sub-graph in 3 of the 4 graphs we have left so that's a keeper. At this point in time I want you to notice that I am going to ignore the other edge that could grow from 'B' until later. This is because gSpan is a depth first algorithm, so once you get success you try to go deeper. When we hit a dead end, we will come back to all of these straggling options to see if they actually work. If you're familiar with computer programming, this type of algorithm is implemented with recursion. Our next step based on our priority rules is going to be to try to link our vertex 3 ('C') to the root, or the 'A'. Again the only frequent edge we can choose from to do this is (C,c,A). 
This option would look like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-x4INTc3SuG8/VTvUjyK93LI/AAAAAAAACSI/rWxgdiwaUuI/s1600/2nd%2Blink%2Bback%2Bto%2Broot.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-x4INTc3SuG8/VTvUjyK93LI/AAAAAAAACSI/rWxgdiwaUuI/s1600/2nd%2Blink%2Bback%2Bto%2Broot.jpg" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>At this point, remember that the last sub-graph that we found that was frequent could only be found in the first 3 graphs in our set, so now we only have to look for this new graph in those three graphs<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-Z1XCLOOZM-Q/VTvVYAwkrlI/AAAAAAAACSQ/LiqIjIWeIZQ/s1600/1st%2Bpriority%2Bdead%2Bend.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://2.bp.blogspot.com/-Z1XCLOOZM-Q/VTvVYAwkrlI/AAAAAAAACSQ/LiqIjIWeIZQ/s1600/1st%2Bpriority%2Bdead%2Bend.jpg" width="400" /></a></div><br />This only exists in one of our graphs, so we go to our next growth priority. Since there is already a link between our 'C' at vertex 3 and the 'B' at vertex 1, the 2nd priority isn't an option, so we try to grow/extend from the 'C' at vertex 3 again. We can either add (C,a,B) or (C,c,A).
Adding (C,a,B) would look like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ZM3wi2B7HMQ/VTvXz1ZTJxI/AAAAAAAACSk/Q-GIaKYD79A/s1600/5%2Bedge%2Badding%2BCaB.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-ZM3wi2B7HMQ/VTvXz1ZTJxI/AAAAAAAACSk/Q-GIaKYD79A/s1600/5%2Bedge%2Badding%2BCaB.jpg" /></a></div><br />If we look in the 3 graphs we have left we find this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-Y-Ma9KeGrB4/VTvYb7Mz2jI/AAAAAAAACSs/ah7UWWw7-kE/s1600/5%2Bedge%2Bdead%2Bend%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="187" src="https://2.bp.blogspot.com/-Y-Ma9KeGrB4/VTvYb7Mz2jI/AAAAAAAACSs/ah7UWWw7-kE/s1600/5%2Bedge%2Bdead%2Bend%2B1.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>That's a dead end because we only have 1 graph with this 5 edge sub-graph. We back up and try adding (C,c,A) instead. I'm not going to do the visuals for this, but it can't be found in any of the graphs we have left either. Since we hit a dead end again here, we go to our 5th priority growth option: growing from the root again. If we try to grow from the root 'A' we can either use (A,b,B) or (A,c,C).
Starting with (A,b,B) we would get a sub-graph that looks like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/--c7Q3ddtpvw/VTvbUZFviaI/AAAAAAAACTA/Awm2olXrpoQ/s1600/Root%2Bgrowth%2Battempt%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/--c7Q3ddtpvw/VTvbUZFviaI/AAAAAAAACTA/Awm2olXrpoQ/s1600/Root%2Bgrowth%2Battempt%2B1.jpg" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>This can't be found in any of our graphs, so we try adding (A,c,C) as our last ditch effort<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-1JDYM8O7_2Y/VTvbWeUFHxI/AAAAAAAACTI/6g3eI5zFnzs/s1600/Root%2Bgrowth%2Battempt%2B2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-1JDYM8O7_2Y/VTvbWeUFHxI/AAAAAAAACTI/6g3eI5zFnzs/s1600/Root%2Bgrowth%2Battempt%2B2.jpg" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>This can't be found either, so we have reached the end of this depth first search. Now we need to go back and take care of the straggler options that we didn't pursue because we were going for depth first. 
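As I mentioned earlier, this grow-deep-then-backtrack control flow is naturally written with recursion. Here is a bare-bones sketch of just the search skeleton: the two genuinely hard parts of gSpan (enumerating rightmost-path extensions in priority order and counting support by subgraph matching) are hidden behind caller-supplied functions, and the hand-wired search space in the demo is purely hypothetical.

```python
def grow(code, extensions_of, support_of, min_support, results):
    """Depth-first pattern growth: try each candidate extension in
    priority order; recurse on frequent children immediately (depth
    first), and fall through to the next option when a child is
    infrequent -- that fall-through is the backtracking."""
    for ext in extensions_of(code):
        child = code + [ext]
        if support_of(child) >= min_support:
            results.append(child)                  # keeper
            grow(child, extensions_of, support_of, min_support, results)
        # else: dead end; the loop moves on to the next straggler option

# a hypothetical, hand-wired search space standing in for real graphs
supports = {("e1",): 4, ("e1", "e2"): 4, ("e1", "e2", "e3"): 2, ("e1", "e4"): 3}
children = {("e1",): ["e2", "e4"], ("e1", "e2"): ["e3"]}

found = []
grow(["e1"], lambda c: children.get(tuple(c), []),
     lambda c: supports.get(tuple(c), 0), 3, found)
# found is now [["e1", "e2"], ["e1", "e4"]]
```

Notice how the extension "e4" is only tried after the whole "e2" branch has been exhausted, which is exactly the straggler-option bookkeeping described above.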
The last straggling option we left was when we were considering adding (B,b,A) to vertex 1.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-gNcmG82BYtI/VTvcXs0oPZI/AAAAAAAACTU/O-A1MZVt7Ts/s1600/Backtracking%2Boption%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-gNcmG82BYtI/VTvcXs0oPZI/AAAAAAAACTU/O-A1MZVt7Ts/s1600/Backtracking%2Boption%2B1.jpg" /></a></div><br />Remember, at that point in time, we were still looking at 4 of the 5 original graphs so when we look for this sub-graph in those 4 graphs this is what we find.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-vUx9-3ktpzk/VTvdFqw3WiI/AAAAAAAACTc/ISFGGcMc6JA/s1600/B%2Cb%2CA%2Bbacktrach%2Bdead%2Bend.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="143" src="https://1.bp.blogspot.com/-vUx9-3ktpzk/VTvdFqw3WiI/AAAAAAAACTc/ISFGGcMc6JA/s1600/B%2Cb%2CA%2Bbacktrach%2Bdead%2Bend.jpg" width="400" /></a></div><br />We only find it in one graph, so that turns into a dead end too. <br /><br /><i><u>This Section Until "2nd Edge (A,c,C)" is a Correction (Thanks to Elife Öztürk)</u></i><br />If we look back further in our algorithm we find that we also haven't checked all of the growth priorities off of the first frequent edge (0,1,A,b,B). Our first growth came from using the 3rd priority growth strategy, but we haven't checked the 4th/5th priority growth strategies there yet. If we do, we can see that it might be possible to make a link like (0,1,A,b,B) and (0,2,A,c,C).
Let's see if we can find that in the graphs<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-iMS6Waw_ufE/V5oXuc0_MII/AAAAAAAACi4/CUuSu01QHrQRcNaabFjzU8AYiop0BN-HQCLcB/s1600/5th%2BPriority%2B1st%2BEdge%2BGrowth%2B%2528Correction%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="141" src="https://3.bp.blogspot.com/-iMS6Waw_ufE/V5oXuc0_MII/AAAAAAAACi4/CUuSu01QHrQRcNaabFjzU8AYiop0BN-HQCLcB/s400/5th%2BPriority%2B1st%2BEdge%2BGrowth%2B%2528Correction%2529.jpg" width="400" /></a></div><br /><br />We can see that in all four of our remaining graphs so we'll keep this sub-graph. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-VaVyJURjao0/V5oYc7K3crI/AAAAAAAACjA/BA_leCBVJxkOe7uEDpch5UixoqaZjQNnwCLcB/s1600/5th%2BPriority%2B1st%2BEdge%2BGrowth%2BSubgraph.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-VaVyJURjao0/V5oYc7K3crI/AAAAAAAACjA/BA_leCBVJxkOe7uEDpch5UixoqaZjQNnwCLcB/s1600/5th%2BPriority%2B1st%2BEdge%2BGrowth%2BSubgraph.jpg" /></a></div><br />Our first priority for growth on this sub-graph would be to connect back to the root, but we don't have any double bonds (sorry for the chemistry reference) in our example so that won't work. The 2nd priority is also impossible, so let's check the 3rd. Can we grow an edge off of the C node? Our 2 options would be (2,3,C,a,B) and (2,3,C,c,A). 
Adding (2,3,C,a,B) looks like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-8nAjHCqsvVs/V5obRV6xH6I/AAAAAAAACjM/uhXToi5TKLsvpj28U0RrFzrwESULsXKLACLcB/s1600/Correction%2B-%2B2nd%2Bgrowth%2BCaB.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-8nAjHCqsvVs/V5obRV6xH6I/AAAAAAAACjM/uhXToi5TKLsvpj28U0RrFzrwESULsXKLACLcB/s1600/Correction%2B-%2B2nd%2Bgrowth%2BCaB.jpg" /></a></div><br /><br />When we search through the graphs, we find this in 2 places<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-JyBLR3K_lzY/V5od2sxv7lI/AAAAAAAACjY/3yfPK0RheAYyAO679S9U03HNvON-3i_5ACLcB/s1600/Correction%2B-%2B2nd%2Bgrowth%2BCaB%2528finding%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="217" src="https://1.bp.blogspot.com/-JyBLR3K_lzY/V5od2sxv7lI/AAAAAAAACjY/3yfPK0RheAYyAO679S9U03HNvON-3i_5ACLcB/s640/Correction%2B-%2B2nd%2Bgrowth%2BCaB%2528finding%2529.jpg" width="640" /></a></div><br />Notice that I'm not counting the instances in the 2nd and 3rd graphs. This is because the only way you can find this subgraph in those graphs is to wrap the graph around such that the B node numbered 1 is the same as the B node numbered 3. The reason why we number the nodes is to make sure they're unique, plus we already found the sub-graph that creates that ABC triangle previously...thus the beauty of the algorithm :).
Since there are only 2 graphs with this sub-graph, the minimum support of 3 is not met.<br /><br />So now let's check for the other growth option (2,3,C,c,A)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-Hj_EGwcPys0/V5ogOAJOvAI/AAAAAAAACjk/djZqtBh6l9gNmqblxbZbTpB_jzNjgh2twCLcB/s1600/Correction%2B-%2B2nd%2Bgrowth%2BCcA.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-Hj_EGwcPys0/V5ogOAJOvAI/AAAAAAAACjk/djZqtBh6l9gNmqblxbZbTpB_jzNjgh2twCLcB/s1600/Correction%2B-%2B2nd%2Bgrowth%2BCcA.jpg" /></a></div><br />That can only be found in one of the graphs<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-ppDUyVZBaEU/V5og-Aszi9I/AAAAAAAACjo/KZHKs2hrq0EbHtzGQVwIFGIec8qdq2-_QCLcB/s1600/Correction%2B-%2B2nd%2Bgrowth%2BCcA%2528finding%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="232" src="https://4.bp.blogspot.com/-ppDUyVZBaEU/V5og-Aszi9I/AAAAAAAACjo/KZHKs2hrq0EbHtzGQVwIFGIec8qdq2-_QCLcB/s640/Correction%2B-%2B2nd%2Bgrowth%2BCcA%2528finding%2529.jpg" width="640" /></a></div><br />So now we've exhausted our growth options off of the first edge (A,b,B).<br /><br /><b><u>2nd Edge (A,c,C)</u></b><br />Before you go crazy thinking that the other 2 edges we need to check will take just as long, let me tell you that we have already done WAY more than half of the work on this problem. Now that we have exhaustively checked all of the graphs for frequent sub-graphs that contain the edge (A,b,B), we can trim down our search space significantly. What I mean is we can delete edges in graphs that contain (A,b,B) and only look for frequent patterns in what is left. 
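A sketch of that shrinking step looks like this (graphs are again represented as a vertex-label list plus `(u, v, edge_label)` tuples; the toy data in the test is hypothetical, not the figures):

```python
def remove_edge_type(graphs, mined):
    """Delete every occurrence of an already-mined edge type, dropping
    graphs left with no edges; later starting edges are then mined
    against this smaller search space."""
    a, e_lbl, b = mined
    shrunk = []
    for vertices, edges in graphs:
        # keep an edge unless it matches the mined type in label and endpoints
        kept = [(u, v, lbl) for u, v, lbl in edges
                if not (lbl == e_lbl and {vertices[u], vertices[v]} == {a, b})]
        if kept:
            shrunk.append((vertices, kept))
    return shrunk
```

Pruning like this is safe because every frequent sub-graph containing the mined edge type has already been reported, so nothing left to discover can use it.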
If we do this, we get 4 graphs (remember we already got rid of graph 5 because it contained no frequent edges) that look like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-TDWruyJAP5w/VTvwDkmJqOI/AAAAAAAACVE/X8mnJQJ4Bgk/s1600/Stripped%2Bdown%2B4%2Bgraphs.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="145" src="https://4.bp.blogspot.com/-TDWruyJAP5w/VTvwDkmJqOI/AAAAAAAACVE/X8mnJQJ4Bgk/s1600/Stripped%2Bdown%2B4%2Bgraphs.jpg" width="400" /></a></div><br />These graphs are much simpler. I also wanted to point out that I removed any edges from these graphs that were not frequent. We could (should) have done this earlier when we were doing our search on the first edge, but I didn't want to complicate things at that time. Now, much like last time, we start with edge (0,1,A,c,C) and try to grow from 'C'. We can either use (1,2,C,a,B) or (1,2,C,c,A). If we add (1,2,C,a,B) we can find that sub-graph in the following places<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-NlPd-QXdbpY/WP-mBhaMKaI/AAAAAAAADZ8/8jYT3uqZpLUqGszi7TQ7z69nhon2KKklACLcB/s1600/ACB%2Bfrequent.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="141" src="https://1.bp.blogspot.com/-NlPd-QXdbpY/WP-mBhaMKaI/AAAAAAAADZ8/8jYT3uqZpLUqGszi7TQ7z69nhon2KKklACLcB/s400/ACB%2Bfrequent.jpg" width="400" /></a></div><br />All 4 graphs have this sub-graph, so it's a keeper and we'll go with it (remember we haven't checked adding (1,2,C,c,A) yet). The next step is to try to grow from the last vertex back to the root.
If we did this we would have to use the edge (B,b,A), but that edge has been removed from all of our graphs because we have already mined all of the frequent sub-graphs with (A,b,B) in them. So, our only other growth option is to try to extend the chain from 'B'. The only option left to grow from 'B' is (B,a,C) because (B,b,A) has already been checked. So we get a sub-graph like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-ht8vt5Vv5CA/VTvrY5hImII/AAAAAAAACUQ/FURHt4-9W4k/s1600/Long%2BBaC%2BChain.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-ht8vt5Vv5CA/VTvrY5hImII/AAAAAAAACUQ/FURHt4-9W4k/s1600/Long%2BBaC%2BChain.jpg" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>This can be found in the following graphs<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-L9X5qb-cJKc/WP-m7Z0KRRI/AAAAAAAADaE/bX7LyFkT9n0ESBwrE5ErF4m6e3NwA7zFgCLcB/s1600/Long%2BBaC%2BChain%2Bfound.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="145" src="https://3.bp.blogspot.com/-L9X5qb-cJKc/WP-m7Z0KRRI/AAAAAAAADaE/bX7LyFkT9n0ESBwrE5ErF4m6e3NwA7zFgCLcB/s400/Long%2BBaC%2BChain%2Bfound.jpg" width="400" /></a></div><br />That's 3 out of the 4 graphs, which meets our support requirement so we'll keep it. Our first growth priority from here is to link node 3 ('C') back to the root. That would be adding a (3,0,C,_,A) link. 
The only link possible for this in our data set is (3,0,C,c,A) which would give us this subgraph to look for<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-NvP0bakjbUA/WP-qE5bTDfI/AAAAAAAADaQ/rNUVKcYDJ8IaI_Ve77mgvguU3veQZiZEQCLcB/s1600/correction_2_subgraph_check.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-NvP0bakjbUA/WP-qE5bTDfI/AAAAAAAADaQ/rNUVKcYDJ8IaI_Ve77mgvguU3veQZiZEQCLcB/s1600/correction_2_subgraph_check.jpg" /></a></div><br />The only place this subgraph is found is in our 2nd example:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-ewQOzC0PHGs/WP-qZy-Od7I/AAAAAAAADaU/bn1-65HfLFEbK0IGI_QU6CxrJ5K31VM7wCLcB/s1600/correction_2_subgraph_search.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="145" src="https://2.bp.blogspot.com/-ewQOzC0PHGs/WP-qZy-Od7I/AAAAAAAADaU/bn1-65HfLFEbK0IGI_QU6CxrJ5K31VM7wCLcB/s400/correction_2_subgraph_search.jpg" width="400" /></a></div>So, that's a dead end. If we back up, our next growth priority would be to connect the C at node 3 back to the C at node 1. A quick look at the graphs we have tells me that's not going to work. The next priority after that is to add another node after node 3. 
We could look for a subgraph with C,a,B added to the end that looks like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-jNTYjGPyjOM/WP-sI1fu0iI/AAAAAAAADag/U1F6yXrkQ0YuNZC0nzSxZgteGe631NiIACLcB/s1600/correction_2_subgraph_check%25282%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-jNTYjGPyjOM/WP-sI1fu0iI/AAAAAAAADag/U1F6yXrkQ0YuNZC0nzSxZgteGe631NiIACLcB/s1600/correction_2_subgraph_check%25282%2529.jpg" /></a></div><br />However, that subgraph is only found here:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-fbV9jVhvk8Y/WP-smFNeTwI/AAAAAAAADak/srJctkw4NAg2keymhHEdL439zGPWyx1OACLcB/s1600/correction_2_subgraph_search%25282%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="148" src="https://4.bp.blogspot.com/-fbV9jVhvk8Y/WP-smFNeTwI/AAAAAAAADak/srJctkw4NAg2keymhHEdL439zGPWyx1OACLcB/s400/correction_2_subgraph_search%25282%2529.jpg" width="400" /></a></div><br />So, we back up again and try our 4th priority, which is growing a link off of the B at node 2. A quick look at the graphs above shows that won't work. Lastly, we'll try to grow further up the graph, like a new node coming off the C at node 1 or a new node off the A at node 0. There are a couple of options for growth off of node 1, but I can tell they're only supported by our first graph. There are no additional options for growth from the root node 'A', so we're done exploring this graph branch.<br /><br />Now we need to step back to when we had the graph (0,1,A,c,C), (1,2,C,a,B) and try to grow again from there. The last priority for growth for this subgraph is to grow from the root. 
The only way we can grow from the root is to add (A,c,C), which would look like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-TuYlAyGybhA/VTvs1ObqcOI/AAAAAAAACUk/DBJzZRtLwqU/s1600/Double%2BAcC%2Broot.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-TuYlAyGybhA/VTvs1ObqcOI/AAAAAAAACUk/DBJzZRtLwqU/s1600/Double%2BAcC%2Broot.jpg" /></a></div>Checking the four graphs we have left, we only find this sub-graph in our 2nd graph<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-fpGRa4k8VMg/VTvxfQ7x5CI/AAAAAAAACVc/-DtAK3ck-1M/s1600/AcC%2Btwice%2Bfrom%2BRoot.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="145" src="https://4.bp.blogspot.com/-fpGRa4k8VMg/VTvxfQ7x5CI/AAAAAAAACVc/-DtAK3ck-1M/s1600/AcC%2Btwice%2Bfrom%2BRoot.jpg" width="400" /></a></div><br />Since this is only found in one place, it's a dead end and we need to backtrack to other options we didn't pursue earlier. We only had one of those when we were adding our 2nd edge to (A,c,C). 
So when we go back and try to add (1,2,C,c,A) to (0,1,A,c,C) we look at our graphs and find that it is only present in 2 of our graphs<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-yWzOkh3dUiY/VTvyFWEQdfI/AAAAAAAACVo/IHY5HvrczeQ/s1600/Last%2BAcC%2Bfound.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="145" src="https://1.bp.blogspot.com/-yWzOkh3dUiY/VTvyFWEQdfI/AAAAAAAACVo/IHY5HvrczeQ/s1600/Last%2BAcC%2Bfound.jpg" width="400" /></a></div><br />As it turns out, these are all of our options starting with edge (A,c,C). See, I told you the 2nd edge would go faster than the first.<br /><br /><b><u>3rd Edge (B,a,C)</u></b><br />Now we have to check the last of our frequent edges to see if there is any way that we can grow a frequent sub-graph with just this edge. Let's start by removing all of the (A,c,C) edges from our graphs to see what we're left with.<br /><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-3bz0ua0Ek1k/VTvyiHTWhlI/AAAAAAAACV4/MrpSYkqS8lY/s1600/BaC%2BOnly%2BGraphs.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="145" src="https://1.bp.blogspot.com/-3bz0ua0Ek1k/VTvyiHTWhlI/AAAAAAAACV4/MrpSYkqS8lY/s1600/BaC%2BOnly%2BGraphs.jpg" width="400" /></a></div><br />I could go through all of the same procedure I have been using for these remaining graphs, but it gets so simple at this point that there really isn't a reason to. If you're using a program running the gSpan algorithm, though, it will be disciplined and do the whole search. From what is left, we can see that (0,1,B,a,C); (0,2,B,a,C) is frequent, but anything larger doesn't work out. 
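To make the support counting concrete, here is a minimal Python sketch of the very first step a gSpan-style miner performs: counting, for each labeled edge, how many graphs contain it, and discarding the infrequent ones (this is how graph 5 got thrown out early). The tiny dataset below is made up for illustration and is not the exact set of graphs from this walkthrough.

```python
from collections import defaultdict

# Hypothetical mini-dataset: each graph is a set of labeled edges
# (vertex_label_1, edge_label, vertex_label_2), written with the smaller
# vertex label first. The labels mirror this post, but the graphs are invented.
graphs = [
    {("A", "b", "B"), ("A", "c", "C"), ("B", "a", "C")},
    {("A", "b", "B"), ("A", "c", "C")},
    {("A", "c", "C"), ("B", "a", "C")},
    {("A", "b", "B"), ("B", "a", "C")},
    {("A", "a", "A")},  # contains no frequent edge, so it gets discarded
]

def frequent_edges(graphs, min_support):
    """Count how many graphs each labeled edge appears in, and keep
    only the edges meeting the minimum support."""
    support = defaultdict(int)
    for g in graphs:
        for edge in g:          # g is a set, so each edge counts once per graph
            support[edge] += 1
    return {e for e, s in support.items() if s >= min_support}

freq = frequent_edges(graphs, min_support=3)
```

With a minimum support of 3, only the three edges that appear in at least 3 of the 5 graphs survive; everything gSpan grows afterwards starts from one of those.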
We hit our last dead end and now we can report all of the sub-graphs that we found that were frequent.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-cxDPqbLVt5k/WP-wy4fIAXI/AAAAAAAADa0/nAA62FWoYDk6lMlfMnfcbQrtvMTvurXzACEw/s1600/All%2BFrequent%2BSub-graphs.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://2.bp.blogspot.com/-cxDPqbLVt5k/WP-wy4fIAXI/AAAAAAAADa0/nAA62FWoYDk6lMlfMnfcbQrtvMTvurXzACEw/s640/All%2BFrequent%2BSub-graphs.jpg" width="580" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><br />I have them organized by the edge they were grown from. The last thing we have to do is translate our DFS codes back to the original symbols. This is easy enough and the user would get an output like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-13R_MI_i2hg/WP-xRska8sI/AAAAAAAADa4/_qnUKl186o01lexlskAqEHhn4NCoi6EzACLcB/s1600/Final%2BAnswer.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://2.bp.blogspot.com/-13R_MI_i2hg/WP-xRska8sI/AAAAAAAADa4/_qnUKl186o01lexlskAqEHhn4NCoi6EzACLcB/s640/Final%2BAnswer.jpg" width="570" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>That's all there is to it. I know that this post got a little long, but for me, being able to see the whole problem worked out helps me learn a TON. I hope it helps somebody else out there too. 
I did gloss over some details of the programming, etc., but this is essentially what's happening in gSpan. Let me know if you have any additional questions.</div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-41256569582352535552015-04-18T14:51:00.000-04:002018-05-18T14:57:24.483-04:00Simple PageRank Algorithm Description<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script><script> (adsbygoogle = window.adsbygoogle || []).push({ google_ad_client: "ca-pub-2829240478141584", enable_page_level_ads: true }); </script><div dir="ltr" style="text-align: left;" trbidi="on">This blog post will give a simple explanation of the original PageRank algorithm. This is the algorithm that differentiated Google from the web search incumbents in the late 1990's (e.g. Yahoo!, Altavista, etc.) and helped to make it what it is today. I will be using material I learned in ChengXiang Zhai's "Text Retrieval and Search Engines" Coursera course, as well as the original technical paper on PageRank by Sergey Brin and Larry Page entitled "The PageRank Citation Ranking: Bringing Order to the Web", 1998. <br /><br />For this blog post I'm going to use a VERY simplified example of the web so that we can work through it and understand what is happening. 
The example below has only 6 web pages, but has the complexity we need for our discussion.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-JNmV9fB-kxU/VTE5YOdIyUI/AAAAAAAACKo/SO2FPu8YZq4/s1600/PageRank%2BWeb%2BDiagram%2BSample.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="296" src="https://1.bp.blogspot.com/-JNmV9fB-kxU/VTE5YOdIyUI/AAAAAAAACKo/SO2FPu8YZq4/s1600/PageRank%2BWeb%2BDiagram%2BSample.jpg" width="320" /></a></div><br />We'll consider each of the items in the network (e.g. A, B, C) to be webpages. PageRank can actually be used for other things like social networks, etc., but we'll stick with the web page version for this post. Each black arrow represents a link from where the arrow starts to where the arrow ends. For example, webpage B has a link to webpage A, and that's it. Webpage A has links to webpages B and D. These links, and the somewhat hidden information that they contain, are the most important part of the PageRank algorithm. <br /><br />Before PageRank, there were actually lots of people that tried to use the information from these links to help with search. These mostly focused on the count of how many links a webpage had pointing to it. If we use this type of thinking, page A seems to be the most important with 4 links to it, C and D come in 2nd place with 2, B comes in 3rd with 1, and E and F come in dead last with 0. This may seem to make sense, but there is something interesting that this method misses. Since page A has so many links, it's pretty obvious it is important, but since it's important, the links that page A has to other pages are more important than other links in the network. PageRank takes this into account.<br /><br />Imagine you are web-surfing in the diagram above. 
Say you pick a web page at random and then read it, find a link on that webpage and click it, read that webpage, find a link on that webpage and click it...lather, rinse, repeat...until you get bored, or stuck. At this point you decide to not follow a link from the page you're on, but to just pick a webpage at random from the web. Once you're on this new random webpage you start reading and clicking on links all over again. If you did all of this, you would essentially be doing what the PageRank algorithm does. It follows this type of random web-surfing and then calculates the probability of arriving at all of the pages in the web.<br /><br />Now that I've explained that process conceptually, let's get into the math that matches this type of random web-surfer behaviour. If we start by creating a grid/table where we put a 1 for links from one webpage to another, we'd get something like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-A-rGe0Kmlmg/VTE9xZNSXGI/AAAAAAAACK0/gUcFIDD_kRo/s1600/Webpage%2BConnection%2BMap.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="301" src="https://4.bp.blogspot.com/-A-rGe0Kmlmg/VTE9xZNSXGI/AAAAAAAACK0/gUcFIDD_kRo/s1600/Webpage%2BConnection%2BMap.jpg" width="320" /></a></div><br />This is helpful because now we can do some analysis on this table, but it's not really what we need. If you're at a webpage, we want to be able to determine how likely you are to pick one of the links on the webpage. If there is only one link on the webpage, this is easy; 100%. If there are 2 or more, the PageRank algorithm just divides this probability up evenly. If there are 2 links, each link has a 1 out of 2 chance of getting clicked (as a side note, in my opinion it would be more interesting if these probabilities were altered based on where the links were on the screen, or some other characteristic like font size/color). 
If we slightly alter our table above to give these probabilities, instead of just the links, it will look like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-fvkSuad1UVo/VTFAF1nq2tI/AAAAAAAACLI/HPJjngULG1s/s1600/Webpage%2BProbability%2BMap.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="301" src="https://1.bp.blogspot.com/-fvkSuad1UVo/VTFAF1nq2tI/AAAAAAAACLI/HPJjngULG1s/s1600/Webpage%2BProbability%2BMap.jpg" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>This matrix is called the transition matrix (later we'll refer to this matrix as M). As mentioned above when describing the random web-surfer, there are 2 parts to this algorithm: clicking links and jumping to pages without links. The transition matrix is useful for the link clicking part of the calculation. Let's say that we start at time t=0 by randomly selecting one of the webpages to start from (each page has a 1 out of 6 chance of being the page we start on) and we want to figure out the probability that our next clicked link will take us to webpage A. To do this we would calculate this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-hd7lYYUAmWE/VTFz1LoAalI/AAAAAAAACLY/IM1llMLbllo/s1600/Prob%2Bof%2BA%2Bon%2B2nd%2BStep.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="171" src="https://4.bp.blogspot.com/-hd7lYYUAmWE/VTFz1LoAalI/AAAAAAAACLY/IM1llMLbllo/s1600/Prob%2Bof%2BA%2Bon%2B2nd%2BStep.jpg" width="640" /></a></div><br />For our example this equals 1/2, 50%, or 0.5 (however you want to write it). The method can be generalized to all of the webpages after we pick a starting webpage. 
The generalized version of this equation looks like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-XZCnmuPQy8g/VTF04KW2fpI/AAAAAAAACLk/iM98_59sNPE/s1600/General%2Bequation%2Bfor%2Bjust%2Bclicking.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-XZCnmuPQy8g/VTF04KW2fpI/AAAAAAAACLk/iM98_59sNPE/s1600/General%2Bequation%2Bfor%2Bjust%2Bclicking.jpg" /></a></div><br />Don't freak out, I'm going to explain what all the symbols mean. The part on the left of the equals sign is just saying that you're going to be calculating the probability that the next click (t+1) is going to land on page dj. 'j' is a subscript that helps you keep track of what page you're calculating the probability for. In our transition matrix above, if j=3, then dj = C. The right side is a summation over i starting at 1 and going to N, where N is the number of pages in your web (in our case N=6). Mij is the element from the transition matrix in row i and column j. pt(di) is the probability at the current time that you have arrived at page di. Since we're still on our first step (t=0), we set pt(di) equal to 1/6 in our example. Now we can calculate the probability of being on all of the pages in our web (not just A) at t=1. The result looks like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-CPVnYCjrJdY/VTF6uId22yI/AAAAAAAACL0/aW7xNdOoCBk/s1600/First%2Bclicking%2Biteration.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="58" src="https://2.bp.blogspot.com/-CPVnYCjrJdY/VTF6uId22yI/AAAAAAAACL0/aW7xNdOoCBk/s1600/First%2Bclicking%2Biteration.jpg" width="400" /></a></div><br />You can see that because there are no webpages that point to pages E and F, we have 0 probability of being on those pages after we've clicked just once. 
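If you want to check the t=0 to t=1 step yourself, here is a small Python sketch of the same calculation. One caution: the link structure below is my reading of the example diagram, so treat it as an assumption rather than a definitive transcription.

```python
# Build the transition matrix M from the link structure, then apply
# p_{t+1}(d_j) = sum_i M_ij * p_t(d_i) once, starting from the uniform 1/6.
# The links here are assumed from the example diagram.
links = {"A": ["B", "D"], "B": ["A"], "C": ["A"],
         "D": ["A", "C"], "E": ["A", "D"], "F": ["C"]}
pages = sorted(links)          # ["A", "B", "C", "D", "E", "F"]
N = len(pages)

# Row i of M spreads page i's click probability evenly over its out-links.
M = [[1 / len(links[pi]) if pj in links[pi] else 0.0 for pj in pages]
     for pi in pages]

p = [1 / N] * N                                   # t = 0: uniform start
p = [sum(M[i][j] * p[i] for i in range(N))        # t = 1: one click
     for j in range(N)]
```

Running this gives 1/2 for page A, matching the hand calculation above, and 0 for pages E and F since nothing links to them.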
The PageRank algorithm gets to rankings of all of the pages after many repetitions of what we just did. I'll walk you through another step (t=1 to t=2) so you get the feel of it. Let's calculate the probability that we're on page A at t=2. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-jmnB4UklAJ4/VTF8uFcBaNI/AAAAAAAACMA/Wk95aHgfG-c/s1600/pt2(A)%2Bequation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="42" src="https://4.bp.blogspot.com/-jmnB4UklAJ4/VTF8uFcBaNI/AAAAAAAACMA/Wk95aHgfG-c/s1600/pt2(A)%2Bequation.png" width="640" /></a></div><br />This is very similar to the step from t=0 to t=1. The first value in each product is from the transition matrix (M) and the 2nd value comes from the probability of being at each page at time t=1. Once you get the hang of this, the process goes quickly and converges to a final answer in just a few iterations. Here's how our web diagram converges.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-18WinsjBB6s/VTF9aAjx43I/AAAAAAAACMI/8Y3KxqYOIYk/s1600/Clicking%2Bconvergence.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://3.bp.blogspot.com/-18WinsjBB6s/VTF9aAjx43I/AAAAAAAACMI/8Y3KxqYOIYk/s1600/Clicking%2Bconvergence.jpg" width="391" /></a></div><br />When we get to about 21 cycles, we see that nothing is really changing and now we have values that can tell us how important each of the webpages is. Based on this example, page A is most important, then pages B and D tie for 2nd. This is not what we got when we just counted how many links each website had. Page B only has 1 link pointing to it, but that link comes from page A, which is the most important, so it counts for more. Remember though, that this isn't the whole page rank algorithm. We have to add in the random jumping to a different page now. 
To do this we have to set a probability that, at any given time, we'll "get bored" and jump to another page. Let's say that we have a 15% chance of getting bored on the page we happen to be on. We'll call this value alpha (α=15%).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-4soAj0JJlbA/VTF_JukkZdI/AAAAAAAACMU/9N6aCM-TnPI/s1600/Equation%2Bwith%2Brandom%2Bjumping%2Badded.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="131" src="https://4.bp.blogspot.com/-4soAj0JJlbA/VTF_JukkZdI/AAAAAAAACMU/9N6aCM-TnPI/s1600/Equation%2Bwith%2Brandom%2Bjumping%2Badded.jpg" width="400" /></a></div><br />Here you can see that the first term in the probability is basically the same, but that it gets multiplied by (1-α). This is because we need to discount this probability by the probability that we get bored and randomly pick a page instead. The 2nd term is adding in the probability of randomly jumping to another page. The 2nd term could be simpler; it could just be α*(1/N). It is written the way it is because this equation can be turned into a matrix math problem, which I'll show you later. For now, just go with it.<br /><br />If we start over at t=0, using this new form of the equation, we can calculate a couple examples and show you how this equation converges as well. 
If we want to calculate the probability of being at webpage A at t=1 we would do the following:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-wP0hu47-38o/VTKNp08u-HI/AAAAAAAACNA/7staknxPSN0/s1600/First%2Bstep%2Bequation%2Bclicking%2Band%2Bjumping.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="147" src="https://4.bp.blogspot.com/-wP0hu47-38o/VTKNp08u-HI/AAAAAAAACNA/7staknxPSN0/s1600/First%2Bstep%2Bequation%2Bclicking%2Band%2Bjumping.jpg" width="640" /></a></div><br />You can see that in the first term in the first line, I just used the value we calculated in our prior example without jumping. I also expanded out the second summation so that it is easier to see. The next 2 lines substitute the actual values into the equation and simplify to an answer. If you follow this calculation for the other webpages you get this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-fGLFXqqBf9U/VTKOZE-g5RI/AAAAAAAACNI/o8VOQb_Zh0k/s1600/First%2Bclicking%2Band%2Bjumping%2Biteration.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="58" src="https://2.bp.blogspot.com/-fGLFXqqBf9U/VTKOZE-g5RI/AAAAAAAACNI/o8VOQb_Zh0k/s1600/First%2Bclicking%2Band%2Bjumping%2Biteration.jpg" width="400" /></a></div><br />I'll walk you through one more step and then show you the whole table and how it converges to a final answer. 
If we want to calculate the probability of being at webpage A at time t=2 we get this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-o_zdmNbxNek/VTKScUZi1II/AAAAAAAACNU/yyc2xsaZtYQ/s1600/second%2Bstep%2Bequation%2Bclicking%2Band%2Bjumping.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="206" src="https://1.bp.blogspot.com/-o_zdmNbxNek/VTKScUZi1II/AAAAAAAACNU/yyc2xsaZtYQ/s1600/second%2Bstep%2Bequation%2Bclicking%2Band%2Bjumping.jpg" width="640" /></a></div><br />The first thing I want you to notice is that I couldn't just use the value from the old method for the first summation. This is because the probability of being at webpage A at t=1 has changed. So, we get to do all of the math this time. Remember that the M terms here come from the transition matrix. Because we're calculating the probability of being at webpage A, we take the values from column A of the transition matrix. The rest is basically the same as the last example. We can do this for all of the webpages (I did it in Excel for this post) and get the table below<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-SvF3SAbJL-A/VTGC6ooM_eI/AAAAAAAACMg/s_UbqwQABlU/s1600/Clicking%2Band%2Bjumping%2Bconvergence.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="375" src="https://1.bp.blogspot.com/-SvF3SAbJL-A/VTGC6ooM_eI/AAAAAAAACMg/s_UbqwQABlU/s1600/Clicking%2Band%2Bjumping%2Bconvergence.jpg" width="400" /></a></div><br />You can see that this table converges where values really aren't changing any more by the time it gets to t=17. The decision of when to stop iterating is based on a user defined variable. 
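Here is a minimal Python sketch of the clicking-plus-jumping update, again assuming my reading of the diagram's links. Running one step from the uniform starting vector reproduces the t=1 row of the table.

```python
# A sketch of the click-plus-jump update
# p_{t+1}(d_j) = (1 - alpha) * sum_i M_ij * p_t(d_i) + alpha * (1/N),
# with alpha = 0.15. The links are assumed from the example diagram.
links = {"A": ["B", "D"], "B": ["A"], "C": ["A"],
         "D": ["A", "C"], "E": ["A", "D"], "F": ["C"]}
pages = sorted(links)
N, alpha = len(pages), 0.15

# Row i of M spreads page i's probability evenly over its out-links.
M = [[1 / len(links[pi]) if pj in links[pi] else 0.0 for pj in pages]
     for pi in pages]

def step(p):
    """One iteration: (1 - alpha) * link clicking + alpha * random jump."""
    return [(1 - alpha) * sum(M[i][j] * p[i] for i in range(N)) + alpha / N
            for j in range(N)]

p = step([1 / N] * N)   # one iteration from the uniform start: this is R1
```

Repeating `p = step(p)` until nothing changes is the whole iteration the table above was built from.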
In the original PageRank white paper, the authors suggest that you loop through 't' until a value, delta (<span style="background-color: #f9f9f9; color: #252525; font-family: sans-serif; font-size: 12.3199996948242px; line-height: 17.2479991912842px;">δ</span>) gets smaller than this user defined value epsilon (<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px; line-height: 22.3999996185303px;">Ɛ</span>). To calculate delta you need to know this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-RdLw1ERrmk4/VTKXLBkF80I/AAAAAAAACNg/khJfb1bcewg/s1600/Delta%2Band%2BL1%2BNorm.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="https://4.bp.blogspot.com/-RdLw1ERrmk4/VTKXLBkF80I/AAAAAAAACNg/khJfb1bcewg/s1600/Delta%2Band%2BL1%2BNorm.jpg" width="320" /></a></div><br />The first line shows how to calculate delta, but if you're anything like me, you may have never been exposed to an "L1 Norm" before. Each of the R terms inside the double bars is a vector containing the probabilities we've calculated for all of the webpages at the given time. So R1 = {0.45, 0.095833, 0.2375, 0.1666, 0.025, 0.025} using the clicking and jumping values from our most recent example. R2 would be {0.389792, 0.21625, 0.117083, 0.226875, 0.025, 0.025}. To calculate our delta value we first have to subtract each term in R1 from R2, then we will take the L1 norm of the result. R2 - R1 = {-0.06021, 0.120417, -0.12042, 0.060208, 0, 0}. Now we need to calculate the L1. This is shown in the last equation in the list above. Basically, you just take the absolute value of every term in the vector (R2-R1) and add them all up. For our example from t=1 to t=2, we get delta equal to 0.36125. 
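In code, the delta calculation is just a few lines. The vectors below are the t=1 and t=2 rows from the example above.

```python
# The convergence check from the post: delta is the L1 norm of the
# difference between successive probability vectors.
R1 = [0.45, 0.095833, 0.2375, 0.166667, 0.025, 0.025]
R2 = [0.389792, 0.21625, 0.117083, 0.226875, 0.025, 0.025]

def l1_delta(prev, curr):
    """Sum of absolute element-wise differences (the L1 norm of curr - prev)."""
    return sum(abs(c - p) for p, c in zip(prev, curr))

delta = l1_delta(R1, R2)   # about 0.36125; stop iterating once delta < epsilon
```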
If I add this delta value to the last column of the convergence table you can see how this value gets smaller as we iterate.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-0XVQTZgRASs/VTKavSBCENI/AAAAAAAACNs/PzbhOkLb5aQ/s1600/Delta%2BAdded%2Bto%2BConvergence%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="328" src="https://3.bp.blogspot.com/-0XVQTZgRASs/VTKavSBCENI/AAAAAAAACNs/PzbhOkLb5aQ/s1600/Delta%2BAdded%2Bto%2BConvergence%2BTable.jpg" width="400" /></a></div><br />I promised earlier I'd show you how the equations to calculate page rank can be turned into a matrix equation. If you're familiar with linear algebra, this will be interesting to you. The other MAJOR advantage of having equations in matrix form is that the processing time in a computer is MUCH faster for matrix math; this is because ridiculously smart people have optimized matrix math routines in most math libraries so that they get the right answers while minimizing processing time. We want to be able to stand on the shoulders of these giants. So, here's how the equations can get morphed into matrix equations<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-X6KVHppchtU/VTKgihH_G0I/AAAAAAAACN8/e0Q1wCYn4ZM/s1600/Matrix%2Bmath%2Bfor%2BPageRank.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="404" src="https://2.bp.blogspot.com/-X6KVHppchtU/VTKgihH_G0I/AAAAAAAACN8/e0Q1wCYn4ZM/s1600/Matrix%2Bmath%2Bfor%2BPageRank.png" width="640" /></a></div>If you know how to use matrices in Matlab, another programming language, or Excel, this last equation can be used to efficiently calculate each iteration.<br /><br />Before I end this post, I need to explain one more thing. 
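As an aside, here is the matrix form as a short NumPy sketch, iterated until delta drops below epsilon. As before, the link structure is my reading of the example diagram, so treat it as an assumption.

```python
import numpy as np

# Matrix form of the iteration: p_{t+1} = (1 - alpha) * M^T p_t + (alpha/N) * 1.
# Links assumed from the example diagram.
links = {"A": ["B", "D"], "B": ["A"], "C": ["A"],
         "D": ["A", "C"], "E": ["A", "D"], "F": ["C"]}
pages = sorted(links)
N, alpha = len(pages), 0.15

M = np.zeros((N, N))
for i, src in enumerate(pages):
    for dst in links[src]:
        M[i, pages.index(dst)] = 1 / len(links[src])

p = np.full(N, 1 / N)                         # uniform start at t = 0
for _ in range(200):
    p_next = (1 - alpha) * (M.T @ p) + alpha / N
    done = np.abs(p_next - p).sum() < 1e-6    # delta < epsilon, so stop
    p = p_next
    if done:
        break

ranks = dict(zip(pages, p))
```

The converged vector sums to 1 and puts page A on top, consistent with the convergence table above.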
If our web diagram happened to look like the diagram below we would have a problem<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-iHNvLfArXxQ/VTKhvGITQqI/AAAAAAAACOI/S9jYkpB8UVg/s1600/PageRank%2BWeb%2BDiagram%2BSample%2B(Dead%2BEnd).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="296" src="https://4.bp.blogspot.com/-iHNvLfArXxQ/VTKhvGITQqI/AAAAAAAACOI/S9jYkpB8UVg/s1600/PageRank%2BWeb%2BDiagram%2BSample%2B(Dead%2BEnd).jpg" width="320" /></a></div><br />The only thing that I changed is that the arrow that was going from F to C, is now going from C to F. The reason why this is a problem is that F isn't pointing to anything. If we randomly select F as our starting point, how can we click on a link to move forward??? We can't. A simple way to solve this problem is for there to be an exception in the PageRank algorithm. If the page has no links to other pages, we don't allow the option to click on a link, we force the algorithm to randomly jump to another page. In essence, in this special case, we set alpha equal to 1. This keeps the algorithm from getting hung up in a dead end.<br /><br />Well, I think that wraps it up. Let me know if you have any additional questions about PageRank that I can answer for you. As always, I hope this helps somebody out there!<br /><br /></div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-69926043013272877352015-04-11T11:32:00.002-04:002015-04-11T11:32:57.992-04:00Probabilistic Retrieval Model: Basics, Query Likelihood and SmoothingThis post discusses a different way (compared to the <a href="http://simpledatamining.blogspot.ca/2015/03/simple-text-retrieval-vector-space.html" target="_blank">vector space model</a>) to rank documents when performing text retrieval. This is called the probabilistic retrieval model. 
It bases its formulas off of probability theory instead of the rules of thumb that were created for the vector space model through trial and error. I'll explain the basics of the probabilistic model, explain some of its limitations, derive a "smoothing" implementation (Jelinek-Mercer), and then give an example of how it all works.<br /><br />If it has been a while since you've taken statistics, or you never have, I'm hoping to make this first section easy for you to follow. When it comes to retrieving the right document during a search we can think of the best documents as the ones that have the highest probability of being relevant to the searcher. The trick comes when we have to define that probability. The mathspeak version of this probability definition is written as "p(R=1|d,q)". Translated into English that reads "the probability that R=1 (i.e. the document is relevant) given that we have document d and query q". If we can efficiently calculate this probability, we can rank the documents in our search by their probability of being relevant.<br /><br />The problem with the probability p(R=1|d,q) is that it is actually very hard, if not impossible, to calculate from the information we have when a user submits a query. So, the data mining community has developed a substitute probability to calculate that gives us basically the same effect. What they use is the probability that the query entered by the user was randomly generated from a relevant document. That probably didn't make sense yet, so let me explain what a unigram is and then give you an example. <br /><br />When you analyze text, you have to decide how the text will be divided up for analysis. If the user enters a query for "hard drive" are you going to treat "hard" and "drive" as 2 separate variables? Or, are you going to treat them as one? How do you know which one you should use? 
For the simplest models we treat each word as a different variable and assume that each word has nothing to do with the other words. This assumption is called statistical independence. It basically means that you're assuming that there is no correlation between the words in the query. This is obviously NOT true, but as it turns out, accepting this assumption actually gives pretty good results anyway, so we'll go with it. So each word in the query gets a fancy name called a unigram (which basically means 1 word). If you bunched words into groups of 2, they would be 2-grams, etc.<br /><br /><b><u>Query Likelihood Example</u></b><br />Now it's time for that example I promised. Suppose you entered the query <i>{hard drive test}</i> and<br />you're looking at document D4 = <i>{...as part of the factory acceptance, every unit gets a <b>hard drive test</b>...}. </i> This document contains each of the query words once. Think of the document D4 as a bag of marbles where each document word (unigram) is a different marble in the bag. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-bR09oumlGcM/VSgY17iWqgI/AAAAAAAACJ0/BlAr6l7lYWA/s1600/Bag%2Bof%2BMarbles.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-bR09oumlGcM/VSgY17iWqgI/AAAAAAAACJ0/BlAr6l7lYWA/s1600/Bag%2Bof%2BMarbles.jpg" height="200" width="173" /></a></div><br />Now randomly take a marble out; what's the probability that marble was one of the query words? It should be the number of times the word is in the document divided by the number of words in the document. If D4 is only 20 words long, then the probability of pulling out the word "hard" is 1/20. The probability of pulling out "drive" and "test" is also 1/20 for each term. 
If you're familiar with statistics, this is random sampling WITH replacement (put the marble back in the bag after you pick one), because the probability of pulling out "drive" isn't 1/19 after I've picked my first word. Now that we understand that, we can predict the probability of generating the query from the document we're looking at; it's just the probabilities of pulling out each word multiplied together, or (1/20) x (1/20) x (1/20) = 1/8000. Using this methodology, you can look at all of the documents in the collection and do the same calculation; the highest ranked document will be the one that spits out the highest probability. If you want the mathspeak version of this probability it looks like this, "p(q|d,R=1)". This basically assumes that all of the documents are relevant (we know they're not) and gives the probability that they generated your query.<br /><br />Now that we have that understanding, what happens when a method like the one above runs into a document that doesn't contain one of the query words...think about it...the probability calculated will be 0. That's because if 1 query word is missing from the document, then one of the terms that get multiplied together will be something like (0/20), which equals 0. Multiplying anything by 0 gives you 0, so this seems to penalize the document WAY too much. What if this document had the phrase "hard drive examination" instead of "hard drive test"? Would we really want to rank that document at the very bottom...I think not! There's a way to fix this problem, but I'm going to have to explain some formulas before I do that.<br /><br /><b><u>Query Likelihood Formulas</u></b><br />Some words are very rare in documents. If you happen to be searching for a rare term, then the probability of finding this term will be very small. This small probability will multiply with other fractions and then it's pretty likely that you'll end up with tiny values for your query probability. 
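To make the bag-of-marbles calculation concrete, here's a minimal Python sketch of this naive query-likelihood score (the word counts and the 20-word length for D4 are the toy values assumed above, and the function name is my own):

```python
def query_likelihood(query, doc_counts, doc_len):
    """Naive unigram query likelihood: the product of p(w|d) over the query words."""
    p = 1.0
    for word in query:
        p *= doc_counts.get(word, 0) / doc_len  # p(w|d) = c(w,d) / |d|
    return p

# Toy D4: each query word appears once in a 20-word document.
d4_counts = {"hard": 1, "drive": 1, "test": 1}
print(query_likelihood(["hard", "drive", "test"], d4_counts, 20))  # (1/20)^3 = 1/8000
```

Notice how quickly the product shrinks; a longer query, or rarer words, pushes it even closer to 0.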
In a computer, when variable values get too close to 0 there is a growing risk that round-off error in the computer will start to become significant and distort your results. To avoid this, the magical properties of the logarithm come to save the day. If you look at the chart below for the log(x) you will see that as x increases, the log(x) also increases (by the way, I assume log base 10 here and throughout the rest of this post). It's not a linear increase, but when it comes to ranking things, that's OK. If X is greater than Y, then log(X) is greater than log(Y). Also notice that the chart spans values where X is 0 to 1, like probabilities are required to do.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-6R4V1K5BicM/VSWSr79jxdI/AAAAAAAACH4/K8b1cWcOUlc/s1600/log(x)%2Bbetween%2B0%2Band%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-6R4V1K5BicM/VSWSr79jxdI/AAAAAAAACH4/K8b1cWcOUlc/s1600/log(x)%2Bbetween%2B0%2Band%2B1.jpg" height="192" width="320" /></a></div><br />The other thing that is SO cool about the logarithm is that log (A*B) = log(A) + log(B). If we apply this rule to the probabilities we're multiplying together, then for the example above (1/20) x (1/20) x (1/20) becomes log(1/20) + log(1/20) + log(1/20). It doesn't look very different here, but this minimizes the round-off problem in a computer. In mathspeak, if we have a lot of terms that get multiplied together we use a large pi symbol. 
The way we calculated the probability of a query being generated from a document above would look like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-aBF1FJi33iU/VSWYklBVWdI/AAAAAAAACIQ/ht1QSSfHMGk/s1600/product%2Bequation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-aBF1FJi33iU/VSWYklBVWdI/AAAAAAAACIQ/ht1QSSfHMGk/s1600/product%2Bequation.jpg" height="89" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>f(q,d) is just a ranking function that depends on the query, q, and the document, d. The fraction to the right is the count of a <b>w</b>ord (this is a <b>w</b>ord in the query) in a <b>d</b>ocument divided by the number of words in the <b>d</b>ocument. This fraction can also be written as p(w<span style="font-size: xx-small;">i</span>|d), or probability of a word given document, d. That BIG pi symbol means that you're going to multiply all of the fractions to the right together from the first word (i=1) in the query to the last word in the query (there are n words in the query). If we take the logarithm of this formula, then we get this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-cWKVXlgEyYo/VSWduyUvjWI/AAAAAAAACIk/rApnTiAlnhM/s1600/Sum%2Blog%2Bequation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-cWKVXlgEyYo/VSWduyUvjWI/AAAAAAAACIk/rApnTiAlnhM/s1600/Sum%2Blog%2Bequation.jpg" height="176" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>Both of these formulas are equivalent, but before your head explodes, calm down and we'll walk through each one slowly. For both of them, we still have f(q,d) as the ranking function. 
We don't have log(f(q,d)) because there's no reason to do this since we have shown that the order is maintained when we take the logarithm. In the top equation, we have the same fraction on the right, but we take the logarithm of this fraction. Instead of multiplying all of these fractions from i=1 to n, we add them all up. That's what the BIG sigma symbol means. Now the difference between the first line and the 2nd one is the subscript on the sigma symbol and the term c(w,<b>q</b>). The subscript on the sigma symbol means that we are going to sum over all of the words in the <b><u>v</u></b>ocabulary (all of the words in the collection of documents). The reason we can do this without messing everything up is that we are multiplying each term by c(<b><u>w</u></b>,<b><u>q</u></b>) which is the count of the <b><u>w</u></b>ord in the <b><u>q</u></b>uery. So when 'w' in the summation equals "hard", then c(w,q)=1 and in effect we're saying this one counts/matters. If the 'w' in the summation equals "banana", that's not part of our query, so c(w,q)=0 and we add nothing to our ranking function. At this point you may be thinking: why would anybody add that complexity to the equation? 
We'll see why this makes some of the notation easier to understand in upcoming sections.<br /><br /><b><u>Language Model Smoothing</u></b><br />If we plot the probability of a word being selected in a document using the query likelihood model on one axis and the word number on a 2nd axis, we might get something that looks like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-SXRRDCVyk2I/VSWlOtHheQI/AAAAAAAACJM/n9gUsMfDZJo/s1600/Query%2BLikelihood%2Bgraph%2Brepresentation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-SXRRDCVyk2I/VSWlOtHheQI/AAAAAAAACJM/n9gUsMfDZJo/s1600/Query%2BLikelihood%2Bgraph%2Brepresentation.jpg" height="279" width="320" /></a></div><br />As described earlier, if the word doesn't exist in the document, then there is 0 probability that it can be picked. As noted, this is probably not desirable because there might be a document that matches very closely, but does not contain one of the words in the query. What would be better is if we could adjust the curve a little to look more like the green line below<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-sryDvt7egpY/VSWl2dvEvMI/AAAAAAAACJY/0COVwtJgWEI/s1600/Smoothing%2Bgraph%2Brepresentation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-sryDvt7egpY/VSWl2dvEvMI/AAAAAAAACJY/0COVwtJgWEI/s1600/Smoothing%2Bgraph%2Brepresentation.jpg" height="279" width="320" /></a></div><br />Notice that near the end of the curve there are non-zero values for p(w|d). These non-zero values will help solve the problem of query terms that don't show up in a document. Since this green curve represents the average probabilities for all the words in the document collection, we wouldn't want to just use this curve for p(w|d). 
If we did, all of the documents would have the same score and the calculations would be pointless. What we really want is a way to take a weighted average between the actual document we're ranking and the whole collection of documents. You can imagine that if we did this, the bumpy blue curve would become much smoother, thus the name "smoothing" for this approach. One method of doing this is called the Jelinek-Mercer (JM) model.<br /><br />To get us to the JM model we've got to go through a derivation first. I've tried to make this derivation as visual as possible. If you don't really care about how the equation is derived, you can just skip down a little bit. But, if you're feeling adventurous, here's what I've come up with to explain it.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-gr2OfuBW9Pc/VSke667RKCI/AAAAAAAAAAM/Sr0yCqs4xe0/s1600/General%2BSmooting%2BFunction%2BDerivation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-gr2OfuBW9Pc/VSke667RKCI/AAAAAAAAAAM/Sr0yCqs4xe0/s1600/General%2BSmooting%2BFunction%2BDerivation.jpg" height="585" width="640" /></a></div><br />The top equation is the one we have already explained. The next step down splits this equation into 2 parts that represent a weighting for the probability of words found in the document and a weighting for the probability of words found in the rest of the collection. You'll notice that the probability of words in the document turned into a weird looking Pseen. I'm going to ask you to just ignore this for now; we'll explain it more later. On the 3rd line we split the 2nd term on the 2nd line because the sum over all of the terms not found in the document is the same as taking all the words in the collection and subtracting the words found in the document. On the 4th line we use one of the properties of the logarithm to split the alpha term out from the p(w|C) term. 
Once we've done all of this we combine and reorganize all the terms in the last line. The first term on this last line takes advantage of the fact that log(a)-log(b)=log(a/b). For the 2nd term, since alpha is a constant we can simplify the notation where |q| is the count of words in the query. The last term on the last line is just added to the end from the 4th line. <br /><br />Now that we have this equation on the last line above, we can adapt it to the JM method of smoothing. To do this we need to define this term circled in red<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-k037LC5GP6I/VSki1X-BteI/AAAAAAAAAAY/jgigaJtZTCc/s1600/Funky%2Bterm%2Bto%2Bdefine.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-k037LC5GP6I/VSki1X-BteI/AAAAAAAAAAY/jgigaJtZTCc/s1600/Funky%2Bterm%2Bto%2Bdefine.jpg" height="99" width="640" /></a></div><br />The JM method starts by defining how it wants to perform the weighting between the probabilities from the documents and the collections. Essentially it is giving a new definition for p(w|d), or the probability of a word given a document. Here's how this is defined in the JM method<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-R-EWO6xyFsc/VSkoET0KWbI/AAAAAAAAAAw/EvtbLYNXBuI/s1600/JM%2Bp(wd)%2Bdefinition.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-R-EWO6xyFsc/VSkoET0KWbI/AAAAAAAAAAw/EvtbLYNXBuI/s1600/JM%2Bp(wd)%2Bdefinition.jpg" height="51" width="320" /></a></div>Let's start by saying that lambda (<span style="background-color: white; font-family: sans-serif; line-height: 1.6;">λ</span>) is a user-selected variable that ranges between 0 and 1 (it's a different way of defining alpha in earlier equations). 
The first term is the weighted probability of a word based on data from the document, and the 2nd term is the weighted probability of a word based on the collection of documents. In the 2nd half of the first term, that fraction is just the count of the word in the document divided by the count of words in the document. You can see that if we set lambda to a large value close to 1, we will basically just be taking the probability of a word based on the document collection. With this one definition, we can now derive the rest of the JM ranking function.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-fKc4piZTetM/VSkx2AnbWGI/AAAAAAAAABY/y7EeL1I0nGU/s1600/JM%2BRanking%2BFunciton%2BFinal%2BDeriviation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-fKc4piZTetM/VSkx2AnbWGI/AAAAAAAAABY/y7EeL1I0nGU/s1600/JM%2BRanking%2BFunciton%2BFinal%2BDeriviation.jpg" height="234" width="640" /></a></div><br />If you're one of those people that don't care about derivations (it's OK, I used to be one of them) just look at the last equation line and use it. If not, here's a quick explanation. The first equation row just used the definition for Pseen to simplify the fraction a little bit. Notice that lambda gets substituted for alpha. Up until now we have been using alpha as our generic term that defines the weighting between the document and collection. Since the JM method defines this as lambda, we just swap alpha out for lambda. After obtaining a simplified fraction for that weird Pseen fraction term, we substitute it into the ranking function in the 2nd line. We also get to completely remove the last 2 terms of this ranking function because they're actually constant. The last term is a probability of a word in the collection and that will be the same for any document in the collection. 
The 2nd to last term is based on the number of words in the query, which is the same for every document we're trying to rank as well. So, there's no need to calculate these values if we're only interested in ranking documents. They get crossed out, and we end up with a simpler equation in the last line. The sum in this final equation is over all the words in the query and document.<br /><br />Now that we have this equation, let's finally do a simple example with it to show how to use it. Let's say that we have the same query and documents used in the post about the <a href="http://simpledatamining.blogspot.ca/2015/03/simple-text-retrieval-vector-space.html">vector space model</a>:<br /><br />q = <i>{hard drive test}</i><br />D1 = <i>{...it's <b>hard</b> to determine...}; |D1|=365</i><br />D2 = <i>{...this <b>hard drive</b> has 100GB of memory...make sure the <b>drive</b> is fully installed...}</i><i>; |D2|=50</i><br />D3 = <i>{...before I bought my new car, I took it out for a <b>test drive</b>...}</i><i>; |D3|=75</i><br />D4 = <i>{...as part of the factory acceptance, every unit gets a <b>hard drive test</b>...}</i><i>; |D4|=50</i><br />D5 = <i>{...that was a <b>hard</b> <b>test</b>...a standardized <b>test</b> is designed to be <b>hard</b> for...}</i><i>; |D5|=230</i><br /><br />To do all of the calculations we need to know the probability of finding the query words in the collection of documents. We take the count of the query words in all the documents and divide them by the total number of words in our collection. For example, the word "hard" shows up 5 times in our collection of documents, and there are 365+50+75+50+230=770 words in the collection. So the probability of "hard" in the collection is 5/770 = 0.006494 = p("hard"|collection). If we do the same for drive and test we get p("drive"|collection) = 0.005195 and p("test"|collection) = 0.005195.<br /><br />The only thing left before we rank some documents is picking a value for lambda. 
Let's just say that we use 0.5 for this example. For the first document we would get the following.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Bg7RQ-Hrxpw/VSk3m6o7IjI/AAAAAAAAABo/Qv2-1ttyA6M/s1600/Hard%2Bdrive%2Btest%2Bformula%2Bexample.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-Bg7RQ-Hrxpw/VSk3m6o7IjI/AAAAAAAAABo/Qv2-1ttyA6M/s1600/Hard%2Bdrive%2Btest%2Bformula%2Bexample.jpg" height="299" width="640" /></a></div><br />Notice that I only included 3 terms here because the other terms have values of c(w,q) that are equal to zero. This is because there are only three query terms. All of the other words in the collection aren't in the query so their value is zero. This is actually a big weakness for the JM method. It doesn't fully solve the problem of query terms that are missing from the document. Instead it smooths out the probability of the terms that are in our query with the probability from the collection. To handle that problem more completely, you have to use another method like Dirichlet Prior or BM25. These are examples of other methods that the data mining community has created. If we continue our example using the JM method we get the following ranking values for the documents in our collection:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-72BSCNAt1FM/VSk5B8QjBQI/AAAAAAAAAB0/pql08mv-iIQ/s1600/Final%2BJM%2BScores.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-72BSCNAt1FM/VSk5B8QjBQI/AAAAAAAAAB0/pql08mv-iIQ/s1600/Final%2BJM%2BScores.jpg" height="141" width="320" /></a></div>We can sort these scores from largest to smallest and output the documents to the user. That's basically how it all works. I think that wraps it up. 
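If you want to reproduce the calculation yourself, here's a sketch of the final JM ranking function in Python. It assumes log base 10 and lambda = 0.5 as above, drops the document-independent constants as shown in the derivation, and uses per-document counts taken from the bolded terms in D1-D5; the function and variable names are my own:

```python
import math

def jm_score(query, doc_counts, doc_len, p_coll, lam=0.5):
    """JM-smoothed ranking score, with the document-independent constants dropped."""
    score = 0.0
    for w in set(query):
        cwd = doc_counts.get(w, 0)  # c(w,d)
        if cwd > 0:                 # words absent from the document contribute nothing
            cwq = query.count(w)    # c(w,q)
            score += cwq * math.log10(1 + (1 - lam) / lam * cwd / (doc_len * p_coll[w]))
    return score

query = ["hard", "drive", "test"]
p_coll = {"hard": 5 / 770, "drive": 4 / 770, "test": 4 / 770}
docs = {
    "D1": ({"hard": 1}, 365),
    "D2": ({"hard": 1, "drive": 2}, 50),
    "D3": ({"drive": 1, "test": 1}, 75),
    "D4": ({"hard": 1, "drive": 1, "test": 1}, 50),
    "D5": ({"hard": 2, "test": 2}, 230),
}
ranked = sorted(docs, key=lambda d: jm_score(query, *docs[d], p_coll), reverse=True)
print(ranked)  # D4 (all three query terms in a short document) comes out on top
```

Sorting the scores from largest to smallest gives the ranked list to show the user.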
If you have any other specific questions about this method, please say so in the comments and I'll see what I can do to augment this explanation to cover it. As always, I hope this helped somebody out there!<br /><br /><br />Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-75193512412961956612015-04-04T18:40:00.000-04:002015-04-04T18:40:09.789-04:00Precision and Recall...and Other VariationsWhen evaluating the effectiveness of a search algorithm, or a classification problem, one of the problems we have is how to measure this "effectiveness". This blog post will cover 2 basic measures for this effectiveness, precision and recall, along with some variations that stem from them.<br /><br />Let's say that you are using Google and you want to find some information about how to fix your kitchen sink when the garbage disposal stops working. You might go to www.google.com and type "fix garbage disposal in sink" into the search field. When I did this, I got the results below<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-hh_Epq0fhRM/VR_8NtX7MlI/AAAAAAAACEI/0pKJ5Q9PCU0/s1600/Google%2BResults.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-hh_Epq0fhRM/VR_8NtX7MlI/AAAAAAAACEI/0pKJ5Q9PCU0/s1600/Google%2BResults.jpg" height="320" width="283" /></a></div>The actual results in the image above don't matter much to our current discussion. The point is, if you're anything like me, sometimes you do a search like this and ~50% of the results are not what you're looking for. You end up looking through the first page or so, hoping to find what you're actually looking for, and then maybe give up, try different search terms, try again... To avoid that, we need a way to measure our search algorithms' effectiveness and report it so it can be improved. 
To do this, let's say that you perform a search and, based on your judgement, this is how you would assess the results returned<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-NsYtbrMHrmo/VSAEiP2Fu1I/AAAAAAAACE4/e_lBgoE4Rl0/s1600/Sample%2Brelevant%2Bresults.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-NsYtbrMHrmo/VSAEiP2Fu1I/AAAAAAAACE4/e_lBgoE4Rl0/s1600/Sample%2Brelevant%2Bresults.jpg" /></a></div><br />Each result in this table is like another result in the ranked list that Google returns when you do a search. We'll use this data to explain precision and recall.<br /><br /><b>Precision</b><br />Think of precision as answering the question, "how many of the results that I got are actually relevant?" This question is answered as a percent. So in the example above, we say that we have 4 relevant results out of 10, or 40% (0.4). When it comes to search algorithm performance, a standard rule of thumb is to just use the first 10 results to measure this. We'll be using this throughout this blog post.<br /><br /><b>Recall</b><br />Think of recall as answering the question, "How many of all of the relevant results did the search show me?" I haven't told you yet how many total relevant results there are for the example above, so we need more information. Let's say that in the whole database (could be the web), there were actually only 5 relevant documents (webpages, etc.). Once we know that, we would be able to say that the recall in this example was 4 out of 5, or 80% (0.8).<br /><br /><b>Which one to use?</b><br />So would you rather use precision or recall? Think about it. Which question are you trying to answer? Do you want all of the results to be relevant, or do you want to make sure you return as many of the relevant results as possible? 
They're both desirable, but a little different.<br /><br />For most Google searches I perform, precision seems to be more important because I really only need to find 1 or 2 websites that answer my question; having 100 websites that answer my question doesn't add much value.<br /><br />If we're trying to do an exhaustive search for all of the documents that cover a certain topic (e.g. a patent search, or information search for a PhD dissertation), then recall seems to be what we care about.<br /><br />In most applications there is a desirable balance between precision and recall. The easiest way to calculate a balance would be to just take the average. What happens when we do this? In the example above we would get 0.60, which <u>seems</u> to make sense. What if we have a different example where there are 2 relevant documents, but the results look like this<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-wPwZOPEhi_I/VSAEkOSUHVI/AAAAAAAACFA/u3sa2BXwDa0/s1600/2nd%2Bsample%2Brelevant%2Bresults.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-wPwZOPEhi_I/VSAEkOSUHVI/AAAAAAAACFA/u3sa2BXwDa0/s1600/2nd%2Bsample%2Brelevant%2Bresults.jpg" /></a></div><br />In this case the precision is 2/10, or 0.2. The recall is 2/2, or 1.0. If we take the simple average of precision and recall, we get 0.6 again. This doesn't make sense because even though the precision was much worse, the recall of 100% balanced it out. What we really want (instead of using a simple mean) is a way to get an average that is only high when both are high, and can get low when either precision or recall gets low. One special way of doing this is called F1. 
The F1 formula looks like this<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-kO8syVmLOkI/VSAGaPaDOPI/AAAAAAAACFM/c1slT4wB-EE/s1600/F1%2BFormula.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-kO8syVmLOkI/VSAGaPaDOPI/AAAAAAAACFM/c1slT4wB-EE/s1600/F1%2BFormula.jpg" height="71" width="200" /></a></div><br />This formula is pretty handy. Since precision and recall are always going to be values between 0 and 1, the numerator will always be between 0 and 2 (2 times 0, or 2 times 1). The denominator will also always be between 0 and 2 (0+0, or 1+1). The numerator will also never be larger than the denominator; this gives us F1 values that are always between 0 and 1. I want to show you some examples so you get a feeling for how this works. The table below shows a wide array of F1 values based on different values of precision and recall<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-AAymbpK67SA/VSArGlo9UXI/AAAAAAAACFo/TvWg6iXlJes/s1600/F1%2Bvalue%2Bexamples.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-AAymbpK67SA/VSArGlo9UXI/AAAAAAAACFo/TvWg6iXlJes/s1600/F1%2Bvalue%2Bexamples.jpg" height="245" width="400" /></a></div><br />I used some conditional formatting in Excel to make this table a little more visual. Red values are lower and green values higher. You can see that there is a strong preference for instances when both precision and recall are high, and a penalty when either one of them is too low.<br /><br /><b>F-Measure (general)</b><br />As mentioned above there may be times when you want your search engine to favour precision over recall, or vice versa. 
To deal with this problem there is a general form of the F1 formula that allows the user to adjust a value, Beta (<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px; font-style: italic; line-height: 22.3999996185303px;">β</span>), in order to give more weight to precision or recall. This general form looks like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-FDfd15iFQMc/VSAsp9HQLtI/AAAAAAAACF0/LuQQdNYZIUA/s1600/F_beta%2BFormula.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-FDfd15iFQMc/VSAsp9HQLtI/AAAAAAAACF0/LuQQdNYZIUA/s1600/F_beta%2BFormula.jpg" height="83" width="200" /></a></div><br />If you set beta equal to 1, then you'll find that this formula simplifies to the F1 formula I already showed you earlier. It may not be intuitive for people out there how you should change the value of beta to favour precision or recall, so I thought that I would create a small chart that would give you some examples of how the F value changes with different values of beta.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-13vBKbbAcag/VSAtZ00k0bI/AAAAAAAACF8/fSCgr27LegY/s1600/Beta%2BVaries%2BChart.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-13vBKbbAcag/VSAtZ00k0bI/AAAAAAAACF8/fSCgr27LegY/s1600/Beta%2BVaries%2BChart.jpg" /></a></div><br />The first thing to notice is that the horizontal axis on this chart is in a logarithmic scale. This should help you understand that if you really want to change the way the F-score favours precision or recall you need to think in orders of magnitude, not small changes. I created the red and blue lines in the chart to show that switching the values of precision and recall doesn't create symmetric lines. 
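To tie the two formulas together, here's a small Python sketch of the general F-measure (F1 is just the β=1 special case), applied to the precision/recall values from the two examples earlier in this post; the function name is my own:

```python
def f_beta(precision, recall, beta=1.0):
    """General F-measure; beta > 1 weights recall more heavily, beta < 1 weights precision."""
    if precision == 0 and recall == 0:
        return 0.0  # avoid dividing by zero when both measures are 0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(0.4, 0.8), 3))  # first example (precision 0.4, recall 0.8): F1 of 0.533
print(round(f_beta(0.2, 1.0), 3))  # second example (precision 0.2, recall 1.0): F1 of 0.333
```

Unlike the simple average (0.6 for both examples), F1 clearly penalizes the second example's poor precision.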
That asymmetry is OK, though, because when you use an F-score you're really only trying to figure out whether one set of results is better than another; it's a relative comparison. <br /><br />Let's use this F-score chart to think through a theoretical example. Say that the chart above represents the results from one query for 4 different versions of a search engine I created. If that were true, I could then use the chart above to estimate a value of beta that seems to match the way I want the results to come out. Suppose that I think that the results from the purple version of the search engine are the best for what I'm looking for, then the green one, then the blue one, then the red one. In that scenario, I would need to pick a value of beta somewhere between 0.4 and 1 (remember it's a log scale so that cross-over point between the blue and green line is difficult to estimate visually).<br /><br /><b>Average Precision</b><br />Another way to try to combine the effects of precision and recall is Average Precision. To visualize this approach we'll plot what is called the precision-recall curve. To create this curve, we have to figure out what the precision and recall are as you progressively work down the results from the query. If we do this for the 2 examples I used at the beginning of this post you'll get something like this.<br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-5Doba7c2uW8/VSA1sefgDeI/AAAAAAAACGM/LybIpIahFCw/s1600/Precision%2BRecall%2BTables%2Bfor%2BCurve.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-5Doba7c2uW8/VSA1sefgDeI/AAAAAAAACGM/LybIpIahFCw/s1600/Precision%2BRecall%2BTables%2Bfor%2BCurve.jpg" height="267" width="400" /></a></div><br />It's important to see that the precision is calculated as you go. 
So for the first line in the first example, the precision is 1, because it's the first result and it's relevant. The second line has a precision of 0.5 because we still have the relevant result from the first line, but the 2nd one isn't relevant and we've only looked at 2 results (1/2). When you construct a precision recall curve, you only care about the points where you find a relevant result (green dots below). The curves for the 2 examples above look like this.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-UlhPLU5kY4Y/VSBjLzwPKDI/AAAAAAAACHQ/i_EMsA5F-8Y/s1600/Example%2B1%2BPR%2Bcurve.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-UlhPLU5kY4Y/VSBjLzwPKDI/AAAAAAAACHQ/i_EMsA5F-8Y/s1600/Example%2B1%2BPR%2Bcurve.jpg" height="294" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Eij6Y35mIEg/VSBjLz9OgsI/AAAAAAAACHU/6AxWiIWkHNA/s1600/Example%2B2%2BPR%2Bcurve.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-Eij6Y35mIEg/VSBjLz9OgsI/AAAAAAAACHU/6AxWiIWkHNA/s1600/Example%2B2%2BPR%2Bcurve.jpg" height="294" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><br /></div>When you look at these 2 curves you can see that the results returned aren't perfect. If they were, there would be a string of green dots at precision 1.0 along the top, and then there would be a drop off at the end once all of the relevant results were returned. 
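The green-dot points above can be generated programmatically. In this sketch the 0/1 relevance list is a hypothetical ranking consistent with the first example's numbers (relevant results at ranks 1, 4, 5, and 8, out of 5 relevant documents in the whole collection), and the function name is my own:

```python
def pr_points(relevance, total_relevant):
    """(recall, precision) pairs at each rank where a relevant result appears."""
    points, hits = [], 0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            points.append((hits / total_relevant, hits / rank))
    return points

example1 = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]  # 1 = relevant result at that rank
for recall, precision in pr_points(example1, 5):
    print(f"recall={recall:.1f}  precision={precision:.2f}")
# precisions at the green dots: 1.0, 0.5, 0.6, 0.5
```

Each pair is one green dot; plotting precision against recall reproduces the first curve above.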
To compare against this ideal state, we can do something analogous to taking the area under the curves to measure how well the search performed. The formula for this is the sum of the precision at all of the green dots divided by the total number of relevant documents/results that could have been found (NOT the number of results returned in the query). For the first example that's (1 + 0.5 + 0.6 + 0.5) / 5 = 0.52. If you work through the 2nd example on your own, you'll see that the average precision is 0.156 (much worse, and that shows in the curve). This 2nd example is so much worse because although the search returned all of the relevant results, they showed up at the bottom of the list. This average precision metric does a great job of penalizing ranked lists if the relevant documents aren't near the top of the list.<br /><br />These are the methods and variants of calculating precision and recall that I learned from Professor ChengXiang Zhai's "Text Retrieval and Search Engines" Coursera course. There are other ways of measuring the effectiveness of a search algorithm that were presented in that course as well. Perhaps I'll write a post about that some other time in the future.<br /><br />As always, I hope this helped somebody out there understand these topics more clearly!<br /><br />Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-54819921543872197462015-03-28T11:08:00.002-04:002015-03-28T11:11:12.273-04:00Simple Text Retrieval Vector Space Model ExplanationThis post is going to be the beginning of my coverage of the content in the <a href="https://www.coursera.org/course/textretrieval" target="_blank">Text Retrieval and Search Engines</a> course taught by Chengxiang Zhai at the University of Illinois through Coursera. 
I'm going to review the basic vector space model for text retrieval, and hint at some of the evolutions of this model and why they're important.<br /><br /><b><u>Vector Space Model</u></b><br />I'm not sure how many of you out there took linear algebra courses, or know much about vectors, but let's discuss this briefly; otherwise you'll be completely lost. Vector space doesn't look like outer space; it looks more like this if you look at a simple 2-dimensional vector.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-L6c-hZYUt-E/VRauGiNhIGI/AAAAAAAACCE/8hN-gdJmBzs/s1600/2D%2BVector%2BSpace.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-L6c-hZYUt-E/VRauGiNhIGI/AAAAAAAACCE/8hN-gdJmBzs/s1600/2D%2BVector%2BSpace.jpg" height="200" width="194" /></a></div><br />The vector here starts at the origin of the graph and then extends out to the point (1,1). In text retrieval, the 2 axes of the graph might represent 2 search terms you're looking for (think googling "hard drive"). The vector on the graph would represent the query itself. Here, the fact that the vector points to (1,1) means that the query contains 1 instance of the term "hard" and one instance of the term "drive", and this could also be represented as <i>{hard, drive}</i>. On the same graph, we can add documents that are in our database (could also be the world wide web). There are a couple of ways to do this, but we'll start simple. The easiest way to do this is just search through the document(s) and determine if the search terms are present in the document. If they are, assign that term a 1; if not, assign it a 0. This is called the bit vector approach. 
Assume that the following represents 2 documents with only the relevant text included.<br /><br /> D1 = <i>{...it's <b>hard</b> to determine....}</i><br /><i> </i>D2 = <i>{...this <b>hard drive</b> has 100GB of memory...make sure the <b>drive</b> is fully installed...}</i><br /><i><br /></i>Using the 1 and 0 rule explained above, I would assign D1 the vector (1,0) and D2 the vector (1,1). In this version of text retrieval these vectors are called bit vectors. Plotting these on the vector space graph we get this.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-h4xgISUh0tQ/VRau3D6vRhI/AAAAAAAACCM/0Sqyi2qQ1ak/s1600/2D%2BVector%2Bspace%2Bwith%2Bquery%2B%26%2Bdocuments.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-h4xgISUh0tQ/VRau3D6vRhI/AAAAAAAACCM/0Sqyi2qQ1ak/s1600/2D%2BVector%2Bspace%2Bwith%2Bquery%2B%26%2Bdocuments.jpg" /></a></div><br />'q' here represents our "query". You can see that D2 matches the query perfectly using this system. D1 seems to be going in a different direction. Data scientists use this "closeness" of the vectors to measure similarity between the search terms and the documents reviewed. To do this, they use the dot product. A dot product is just a fancy way of saying you multiply the corresponding terms together and then add up the results. If you're familiar with MS Excel, this is like using the sumproduct() function. For our example above, the dot product between the query and D1 is 1x1+1x0=1. For D2 we get 1x1+1x1=2. The dot product value can be used to rank the documents. "Closeness" is ranked like a basketball score, the higher the better, so you can see that D2 ranks higher than D1. <br /><br />Now I want to step back and explain the dot product a little more so you can get some intuition behind why the dot product works well for measuring the similarity of vectors. 
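Here's a minimal Python sketch of that bit-vector scoring, using the query and documents from this example (the text strings are shortened stand-ins for the documents):

```python
def bit_vector(terms, document_text):
    """1 if the query term appears anywhere in the document, 0 otherwise."""
    words = document_text.lower().split()
    return [1 if term in words else 0 for term in terms]

def dot_product(a, b):
    # Multiply term-by-term and sum -- Excel's sumproduct()
    return sum(x * y for x, y in zip(a, b))

query = ["hard", "drive"]
q = [1, 1]
d1 = bit_vector(query, "it's hard to determine")
d2 = bit_vector(query, "this hard drive has 100GB of memory make sure the drive is fully installed")
print(dot_product(q, d1), dot_product(q, d2))  # prints: 1 2
```

The scores match the hand calculation above: D2 ranks higher than D1.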
If you look up the definition of a dot product on Wikipedia you'll find that the dot product (A<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px; line-height: 22.3999996185303px; white-space: nowrap;">⋅</span>B) can also be calculated this way: A<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px; line-height: 22.3999996185303px; white-space: nowrap;">⋅</span>B = ||A|| ||B|| cos<span style="font-family: Arial; font-size: 15px; white-space: pre-wrap;">θ</span>. The ||A|| represents the length of the vector A. If you remember back to trigonometry, the length of the hypotenuse of a right triangle is calculated like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-dFrQH3nwP3I/VRawTToDc9I/AAAAAAAACCY/XROWiIUZEss/s1600/Length%2Bof%2BHypotenuse.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-dFrQH3nwP3I/VRawTToDc9I/AAAAAAAACCY/XROWiIUZEss/s1600/Length%2Bof%2BHypotenuse.jpg" /></a></div><br />If you use this thinking and apply it to vector space, you can see that for the query and the document we have right angle triangles and we can calculate the length of the vectors using this method. When we do this we get ||D1||=1, and ||D2||=||q||=1.414... where ||q|| represents the length of the query vector. Just because the lengths are equal doesn't mean that the vectors are the same. That is why the dot product definition includes the cos<span style="font-family: Arial; font-size: 15px; white-space: pre-wrap;">θ </span>term. If the value of theta (<span style="font-family: Arial; font-size: 15px; white-space: pre-wrap;">θ</span>) is close to 0, then cosine is close to 1; as theta gets closer to 90 degrees, cosine gets closer to 0. So, if you have two vectors that are pointed in almost the same direction, then the cosine between them will be ~1. 
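Those length and cosine calculations are easy to check in code; this sketch just re-does the 2-dimensional example with Python's math module:

```python
import math

def length(v):
    # ||v|| = square root of the sum of the squared components
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (length(a) * length(b))

q, d1, d2 = [1, 1], [1, 0], [1, 1]
print(length(d1))     # 1.0
print(length(d2))     # 1.4142... (same as ||q||)
print(cosine(q, d2))  # ~1.0: q and D2 point in the same direction
print(cosine(q, d1))  # ~0.707: q and D1 are 45 degrees apart
```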
This allows the ||A|| ||B|| to add to the relevance score. If the two vectors are 90 degrees apart from each other, the ||A|| ||B|| will get multiplied by something close to 0, essentially eliminating the effect. <br /><br />Now that we have that simple understanding of vectors and dot products, let's look at this in 3 dimensions. In the diagram below we have a 3 word query that creates 3 dimensions to compare documents against. Here we're assuming the query is {hard, drive, test}. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-9Wo_nCUczWc/VRayNZMdshI/AAAAAAAACCk/Q28Pmg7CQDs/s1600/3D%2Bquery%2Bin%2Bvector%2Bspace.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-9Wo_nCUczWc/VRayNZMdshI/AAAAAAAACCk/Q28Pmg7CQDs/s1600/3D%2Bquery%2Bin%2Bvector%2Bspace.jpg" /></a></div><br />If we look at several documents, we might have documents that contain the following text:<br />D1 = <i>{...it's <b>hard</b> to determine...}</i><br />D2 = <i>{...this <b>hard drive</b> has 100GB of memory...make sure the <b>drive</b> is fully installed...}</i><br />D3 = <i>{...before I bought my new car, I took it out for a <b>test drive</b>...}</i><br />D4 = <i>{...as part of the factory acceptance, every unit gets a <b>hard drive test</b>...}</i><br />D5 = <i>{...I think I failed, that was a <b>hard</b> <b>test</b>...a standardized <b>test</b> is designed to be <b>hard</b> for...}</i><br /><br />When I put these documents in the 3 dimensional vector space diagram it looks like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-qNCNHwGEVy0/VRa0q22NwCI/AAAAAAAACCw/2rZDXL36WjU/s1600/3D%2Bquery%2Bwith%2Bdocuments%2Bin%2Bvector%2Bspace.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" 
src="http://3.bp.blogspot.com/-qNCNHwGEVy0/VRa0q22NwCI/AAAAAAAACCw/2rZDXL36WjU/s1600/3D%2Bquery%2Bwith%2Bdocuments%2Bin%2Bvector%2Bspace.jpg" /></a></div><br />You can see the angles between the query vector and the document vectors and get a feel for which documents match closely. The actual dot product calculations for these query/document comparisons turn out like this using the binary bit vector approach described earlier.<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-6s7h0VSWbCc/VRa56aCr5RI/AAAAAAAACDI/fnKxCkmzCx4/s1600/Dot%2Bproduct%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-6s7h0VSWbCc/VRa56aCr5RI/AAAAAAAACDI/fnKxCkmzCx4/s1600/Dot%2Bproduct%2BTable.jpg" height="165" width="400" /></a></div><br />Based on the dot product values, it's obvious that D4 is the best match. You can also see this in the last column that gives you the angle between the vectors. To calculate this angle, I had to calculate the length of the vectors. This is VERY similar to doing it in 2 dimensions. 3D length is calculated like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-Cfrm09JJoyQ/VRa-jr2DopI/AAAAAAAACDo/Va0FR_lhxaA/s1600/Vector%2Blength%2Bin%2B3D%2Bspace.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-Cfrm09JJoyQ/VRa-jr2DopI/AAAAAAAACDo/Va0FR_lhxaA/s1600/Vector%2Blength%2Bin%2B3D%2Bspace.jpg" /></a></div>A, B, and C are perpendicular lengths in the vector space (distance along hard, test, and drive dimensions). You just square them all, add them up, and take the square root. This same principle applies to higher dimensions (e.g. 
the length of a 4 dimensional vector is the square root of the sum of the 4 squared terms/dimensions).<br /><br /><b><u>Term Frequency</u></b><br />The bit vector (1's and 0's) approach is a very simplistic model, however, and there are times when this approach doesn't give you very good results. Let's do the same problem over again with what is called the term frequency approach. This method counts the frequency of each of the terms found in the query, instead of just determining if the term exists in the document or not. If we apply this rule to the problem above, the vector space looks like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-dCCMthFoeYY/VRa8LOMzEZI/AAAAAAAACDU/iSGqvXG_cKQ/s1600/Term%2BFrequency%2BVector%2BSpace.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-dCCMthFoeYY/VRa8LOMzEZI/AAAAAAAACDU/iSGqvXG_cKQ/s1600/Term%2BFrequency%2BVector%2BSpace.jpg" /></a></div><br />And the overall dot product calculation table looks like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-nNxYG7djCLY/VRa8akwCt6I/AAAAAAAACDc/DK3K6dg7oPs/s1600/Term%2BFrequency%2BDot%2BProduct%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-nNxYG7djCLY/VRa8akwCt6I/AAAAAAAACDc/DK3K6dg7oPs/s1600/Term%2BFrequency%2BDot%2BProduct%2BTable.jpg" height="166" width="400" /></a></div><br />In bit vector mode, D2, D3, and D5 were all considered equally relevant. Using term frequency we are able to say that D5 is a better match than D2, and D2 is a better match than D3. The problem though is that now D5 looks like it's a better match than D4, which doesn't make sense. One method to deal with this problem is to discount the score impact of terms that occur frequently in documents. 
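Switching the earlier bit-vector idea to term frequency is a one-line change: count occurrences instead of testing membership. The toy document strings below paraphrase D2, D4, and D5 from the example:

```python
def tf_vector(terms, document_text):
    """Count how many times each query term appears in the document."""
    words = document_text.lower().split()
    return [words.count(term) for term in terms]

def dot_product(a, b):
    return sum(x * y for x, y in zip(a, b))

query_terms = ["hard", "drive", "test"]
q = [1, 1, 1]
docs = {
    "D2": "this hard drive has 100GB of memory make sure the drive is fully installed",
    "D4": "as part of the factory acceptance every unit gets a hard drive test",
    "D5": "I think I failed that was a hard test a standardized test is designed to be hard for",
}
scores = {name: dot_product(q, tf_vector(query_terms, text))
          for name, text in docs.items()}
print(scores)  # D5 scores 4, beating D4's 3 -- the problem described above
```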
In our case, if we discounted "hard", or "test", we might find that D4 comes out on top again. This method is called inverse document frequency and I'll write another post about that later. When I do, the link to it will be HERE.<br /><br />As always, I hope this post helps somebody out there.<br /><br /><br /><br />Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-3801833616799814362015-03-21T15:51:00.000-04:002018-05-18T14:57:02.620-04:00Graph Pattern Mining (gSpan) - Introduction<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script><script> (adsbygoogle = window.adsbygoogle || []).push({ google_ad_client: "ca-pub-2829240478141584", enable_page_level_ads: true }); </script><div dir="ltr" style="text-align: left;" trbidi="on">For those of you out there that follow this blog fairly regularly, you may have noticed that there has been a lull in the number of posts recently. This is not because I have not been working on my next post; it is because it has taken me quite some time to understand the topic described in this post. That being said, I'm assuming (yes I know what happens when people assume) that for many of you out there, this topic would drain several precious hours of your life to understand as well. So, I'm going to share what I've learned about the gSpan algorithm. In this post I will lay the foundation by describing how data scientists create some sort of order out of all of the different possible graph configurations that can exist. In my gSpan algorithm post, I'll describe how the information presented in this post is used to find frequent graph patterns. The information in this post is based on "gSpan: Graph-Based Substructure Pattern Mining" by Xifeng Yan and Jiawei Han, September 3, 2002.<br /><br /><b><u>Graphs and why they're tricky to pattern mine</u></b><br />First of all, let's start SUPER simple. 
What do people mean when they talk about finding patterns in graphs? Let's just say that they <b><u>aren't</u></b> talking about something like this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-sLHvTs1p0NM/VQolfiuGrqI/AAAAAAAAB-Y/6aM1qyILJLo/s1600/NOT%2Bthis%2Bgraph.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="192" src="https://4.bp.blogspot.com/-sLHvTs1p0NM/VQolfiuGrqI/AAAAAAAAB-Y/6aM1qyILJLo/s1600/NOT%2Bthis%2Bgraph.jpg" width="320" /></a></div><br />They are actually talking about some sort of visual representation of something, like this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-G_LdpqObfWU/VQol0l00QCI/AAAAAAAAB-g/_7H-pFn8bBY/s1600/Graph%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-G_LdpqObfWU/VQol0l00QCI/AAAAAAAAB-g/_7H-pFn8bBY/s1600/Graph%2B1.jpg" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">I have no idea what the graph above represents...I totally made it up. But, it could look like a chemical composition diagram to some of you out there, or maybe a social network diagram, a family history tree...whatever. Essentially, graphs are systems that have nodes (the A, B and C in the picture above) and connections between them (solid, dashed and double lines above). In graph mining terminology they call a node a vertex and a connection an edge. 
If you ask me why, I have no idea, but that random knowledge might help you if you ever decide to pick up a technical paper on the subject.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Now, if you have followed some of my previous posts on pattern mining (<a href="http://simpledatamining.blogspot.ca/2015/02/apriori-principle_17.html" target="_blank">Apriori Basket Analysis</a>, or <a href="http://simpledatamining.blogspot.ca/2015/03/generalized-sequential-pattern-gsp.html" target="_blank">Generalized Sequential Pattern Mining</a>) you might look at that graph above and ask yourself, where do I start with that thing? This is a major question that has vexed many mathematicians and data scientists. Let's just start by stating the fact that, in general, when we're dealing with graphs, we don't really care about the orientation of the graph, or how it "looks" per se; we really only care about what vertices are connected to each other and how. As an example, here's that first graph again, with some other equivalent graphs. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Bs4cfvj9aG0/VQooS2G_8HI/AAAAAAAAB-0/9niI-SWDkZQ/s1600/Graph%2BIsomorphism%2BExamples.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="227" src="https://1.bp.blogspot.com/-Bs4cfvj9aG0/VQooS2G_8HI/AAAAAAAAB-0/9niI-SWDkZQ/s1600/Graph%2BIsomorphism%2BExamples.jpg" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div>You'll notice the term "Isomorphism" in that picture. That's just a fancy way of saying those graphs are essentially the same as graph 1 if you only consider the vertices and edges, but rearranged to look different. 
Since what we really want is to be able to look for sub-graphs (or small subsets of the graphs in our database) and see if they are <a href="http://simpledatamining.blogspot.ca/2015/02/support-and-confidence-in-pattern.html" target="_blank">frequent</a>, we would love to be able to create some sort of order for the graphs so we could kind of treat them like a <a href="http://simpledatamining.blogspot.ca/2015/03/generalized-sequential-pattern-gsp.html" target="_blank">sequential pattern</a>. In sequential pattern mining we represent patterns something like this <A,(B,C)D,E,(E,F)>. There's a chronological order there: first A, then B and C, then D, then E, then E and F. But with a graph, where do you start??? For example, I can number the vertices of graph 1 several different ways.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-k_2sN7pkOz0/VQ2-IPMbKXI/AAAAAAAACBk/4dKsCmQGebw/s1600/Vertex%2BNumbering%2BExamples.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="144" src="https://1.bp.blogspot.com/-k_2sN7pkOz0/VQ2-IPMbKXI/AAAAAAAACBk/4dKsCmQGebw/s1600/Vertex%2BNumbering%2BExamples.jpg" width="640" /></a></div><br />First of all, make a note that the numbering starts with the number 0; this is referred to as the root. Two of the variations above start at the C at the top (but is that really the "top"?). One starts at the A in the lower left corner, and the last one starts at the B. The order that the nodes are numbered can create a LOT of numbered graphs, but they're all different versions of the same thing. What a data miner really needs is a way to create some sort of sequential order so that we always number graph nodes/vertices the same way and therefore don't waste a whole bunch of time dealing with graphs that might be numbered differently, but represent the same thing we already have. 
Since I'm focusing on the gSpan algorithm, I'm going to describe how DFS codes solve this problem.<br /><br />DFS in DFS code stands for <u>d</u>epth <u>f</u>irst <u>s</u>earch. When we get into the gSpan algorithm, the reason for this will become more apparent, but for now just go with it. A DFS code is just a way of documenting the vertices and edges of a graph in tabular form, but with special rules. We'll represent each edge by defining 5 things. To do this, first we need to create codes for the features in our graphs. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-d0-IpThSkGo/VQrWLlH07rI/AAAAAAAAB_g/o28UEayjqV0/s1600/Graph%2BLegend.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="157" src="https://3.bp.blogspot.com/-d0-IpThSkGo/VQrWLlH07rI/AAAAAAAAB_g/o28UEayjqV0/s1600/Graph%2BLegend.jpg" width="320" /></a></div><br />With this coding system, we can turn the green edge in the 1st graph above into (0,1,C,c,B). The 0 is for the starting vertex; the 1 is for the ending vertex; 'C' is for the starting vertex label/type; 'c' is for the edge label/type; and 'B' is for the ending vertex label/type. The reason for having a code for both the vertex number and the vertex label is that if you're analysing a graph for frequent patterns, the vertex number doesn't give you much information about patterns, but we need it to keep track of where we are in the graph. 
If you finish coding all of the edges in that first graph you might come up with something that looks like this.<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-j_sM_fyhpfg/VQyHAZ2ehnI/AAAAAAAACAU/TCfy8AMGgoM/s1600/Naive%2BDFS%2BCoding%2BExample.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-j_sM_fyhpfg/VQyHAZ2ehnI/AAAAAAAACAU/TCfy8AMGgoM/s1600/Naive%2BDFS%2BCoding%2BExample.jpg" /></a></div><br />Since you don't have any rules at this point to guide you, you could have just as easily come up with DFS codes that look like any of the DFS codes below...or more<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-evdB-nhP7lo/VQyHATfG1uI/AAAAAAAACAM/wv4KzCEWi7w/s1600/More%2BNaive%2BDFS%2BExamples.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="176" src="https://1.bp.blogspot.com/-evdB-nhP7lo/VQyHATfG1uI/AAAAAAAACAM/wv4KzCEWi7w/s1600/More%2BNaive%2BDFS%2BExamples.jpg" width="400" /></a></div><br />All of these examples start from 0 in the lower left corner, but are very different. It becomes obvious that we need to come up with a standardized way to order our graphs for several reasons: (1) so that we don't drive ourselves crazy comparing codes like the ones above only to realize they're the same graph, (2) to make it easier to find a subgraph we might think is frequent in many different graphs that will likely have very different configurations. One of the things required to solve these problems is a set of rules called "Neighborhood Restriction" that limit the ways that we can create these lists of connections in our graphs. 
These rules govern the order that you add edges to the list when you're creating codes for a graph. The simplified English version of the rules listed in the technical paper goes something like this.<br /><br /><br /><ul><li><i>If the first vertex of the current edge is less than the 2nd vertex of the current edge (forward edge)</i></li><ul><li><i>If the first vertex of the next edge is less than the 2nd vertex of the next edge (forward edge)</i></li><ul><li><i>If the first vertex of the next edge is less than or equal to the 2nd vertex of the current edge</i></li><li><i>AND If the 2nd vertex of the next edge is equal to the 2nd vertex of the current edge plus one <u>this is an acceptable next edge</u></i></li><li><i>Otherwise <u>the next edge being considered isn't valid</u></i></li></ul><li><i>Otherwise (next edge is a backward edge)</i></li><ul><li><i>If the first vertex of the next edge is equal to the 2nd vertex of the current edge</i></li><li><i>AND If the 2nd vertex of the next edge is less than the 1st vertex of the current edge <u>this is an acceptable next edge</u></i></li><li><i>Otherwise <u>the next edge being considered isn't valid</u></i></li></ul></ul><li><i>Otherwise (the current edge is a backward edge)</i></li><ul><li><i>If the first vertex of the next edge is less than the 2nd vertex of the next edge (forward edge)</i></li><ul><li><i>If the first vertex of the next edge is less than or equal to the 1st vertex of the current edge</i></li><li><i>AND If the 2nd vertex of the next edge is equal to the 1st vertex of the current edge plus one <u>this is an acceptable next edge</u></i></li><li><i>Otherwise <u>the next edge being considered isn't valid</u></i></li></ul><li><i>Otherwise (next edge is a backward edge)</i></li><ul><li><i>If the first vertex of the next edge is equal to the 1st vertex of the current edge</i></li><li><i>AND If the 2nd vertex of the current edge is less than the 2nd vertex of the next edge <u>this is an acceptable next 
edge</u></i></li><li><i>Otherwise <u>the next edge being considered isn't valid</u></i></li></ul></ul></ul><div>I know that's hard to follow, so I'll walk you through one and give you the answers to a couple more of these. It just so happens that Example 1 above meets all of these criteria. So, for (0,1,C,c,B) to (1,2,B,a,A) we see that 0,1 is a forward edge, and 1,2 is a forward edge. So we need to make sure that the 1 in (1,2,B,a,A) is less than or equal to the 1 in (0,1,C,c,B)...check; AND, we need to make sure that the 2 in (1,2,B,a,A) is equal to the 1 in (0,1,C,c,B) +1; 2 = 1+1...check. So the step from edge 0 to edge 1 is valid.</div><div><br /></div><div>Let's do another one. In example 1 still, (1,2,B,a,A) is a forward edge because 1 is less than 2 and (2,0,A,b,C) is a backward edge because 2 is greater than 0. So we check if the 2 in (2,0,A,b,C) is equal to the 2 in (1,2,B,a,A)...check; AND we check if the 0 in (2,0,A,b,C) is less than the 1 in (1,2,B,a,A)...check.</div><div><br /></div><div>If you keep going through Example 1, and the rest of the examples above, you'll see how hard it might be to create this type of order at random.<br /><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-xSuQHYEXSlU/VQ2xnprUdqI/AAAAAAAACBU/0a319l3verE/s1600/DFS%2BCode%2BNeighborhood%2BRestriction%2BInspection.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="173" src="https://4.bp.blogspot.com/-xSuQHYEXSlU/VQ2xnprUdqI/AAAAAAAACBU/0a319l3verE/s1600/DFS%2BCode%2BNeighborhood%2BRestriction%2BInspection.jpg" width="400" /></a></div><div><br /></div><div>I created example 1 to fit the rules on purpose. Examples 2, 3 and 4 were created at random. 
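Those nested rules only ever look at the vertex numbers, so they can be transcribed into a small validity check. This sketch uses (from_vertex, to_vertex) pairs and drops the labels:

```python
def is_valid_next_edge(cur, nxt):
    """Neighborhood restriction: can `nxt` legally follow `cur` in a DFS code?
    An edge (i, j) is forward when i < j, backward when i > j."""
    ci, cj = cur
    ni, nj = nxt
    if ci < cj:                       # current edge is forward
        if ni < nj:                   # next edge is forward
            return ni <= cj and nj == cj + 1
        return ni == cj and nj < ci   # next edge is backward
    else:                             # current edge is backward
        if ni < nj:                   # next edge is forward
            return ni <= ci and nj == ci + 1
        return ni == ci and cj < nj   # next edge is backward

# The two steps walked through above, both valid
print(is_valid_next_edge((0, 1), (1, 2)))  # True
print(is_valid_next_edge((1, 2), (2, 0)))  # True
```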
I was surprised to see how many edge steps I accidentally got right for those 3.</div><div><br /></div><div>If you take some time and are careful, you can create several different DFS codes like example 1 that meet the neighborhood restriction rules, but are different from example 1. Below are 3 examples (1, 5, and 6) that meet the rules. To get the 2 new examples (5 and 6), I started at different locations in the graph. The easiest way to get these patterns (and the method that is actually used in gSpan) is to do the following: First you take a step forward, then try to make any backward connections possible, connecting from smallest to largest (e.g. 3,0 comes before 3,1 if applicable). Then, you take another step forward (e.g. 3,4) and repeat until you're done with the graph.<br /><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-O_jMG_py6wg/VQ2_MnoDlCI/AAAAAAAACBw/jw6uKqNXhd0/s1600/DFS%2Bcodes%2Bthat%2Bneed%2Bneighborhood%2Breq.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-O_jMG_py6wg/VQ2_MnoDlCI/AAAAAAAACBw/jw6uKqNXhd0/s1600/DFS%2Bcodes%2Bthat%2Bneed%2Bneighborhood%2Breq.jpg" /></a></div><br /></div><div>So, it appears we've gotten closer to figuring out the 1 way we should represent a graph, but we're not quite there. To fix this problem, the creators of the gSpan algorithm have an ordering system that will allow us to figure out if example 1 is "less than" example 5, or if example 5 is "less than" example 6. This ordering system is great because once we have this, we just define THE representation of the graph (the one we'll be using in the gSpan algorithm) as the minimum DFS code possible. This order is called DFS Lexicographic Order. You may remember from other posts that lexicographic is just a fancy way of saying the order is going from smallest to largest (e.g. 1,2,3 or A, B, C...). 
To do this we also make use of the fact that we have given codes to our vertex types and edge types. Remember how the blue circles in our examples are 'A', and the dotted green lines are 'c'? 'A' is a vertex label and 'c' is an edge label. Here are the rules that govern this order and that will help us get our "minimum" DFS code; I'm going to write them in a very simplified pseudo-code format.</div><div><br /></div><div><i>Let X be one version of the graph's DFS code and Y be another version for the same graph</i></div><div><i>Assume that X <u>></u> Y to start</i><br /><i>Start by comparing the first edge (edge 0) of both DFS codes</i><br /><i>Start a loop for comparisons going through all the edges</i></div><i> </i><br /><i> Check each of the following rules; if one applies, set X < Y and exit the loop</i><br /><br /><ol><li><i>Is X a backward edge and Y a forward edge?</i></li><li><i>Is X a backward edge, Y a backward edge, and the 2nd vertex of X < 2nd vertex of Y?</i></li><li><i>Is X a backward edge, Y a backward edge, 2nd vertex of X = 2nd vertex of Y, and the edge label of X is less than the edge label of Y?</i></li><li><i>Is X a forward edge, Y a forward edge, the 1st vertex of Y < the 1st vertex of X?</i></li><li><i>Is X a forward edge, Y a forward edge, the 1st vertex of X = the 1st vertex of Y, and the label for the first vertex of X is less than the label for the first vertex of Y?</i></li><li><i>Is X a forward edge, Y a forward edge, the 1st vertex of X = the 1st vertex of Y, the first vertex label of X = the first vertex label of Y, and the edge label for X is less than the edge label for Y?</i></li><li><i>Is X a forward edge, Y a forward edge, the 1st vertex of X = the 1st vertex of Y, the first vertex label of X = the first vertex label of Y, the edge label for X = the edge label for Y, and the 2nd vertex label of X < the 2nd vertex label of Y?</i></li></ol><br /><i> If you've made it to this section of code, you haven't proven that X is less than Y 
yet</i><br /><i> If there's another edge left in the graph to check, increment up one edge (e.g. go from edge 0 to 1)</i><br /><i> </i><br /><i>End Loop</i><br /><i><br /></i>Again, like a lot of things in data mining, this might look complex, but when you see it in a couple of examples it is pretty easy to follow. If we apply this to Example 1, 5, and 6 above you'll see what I mean. <br /><br />Let's let example 1 be DFS code X and example 5 be DFS code Y in the code above. For the first edge, they are both forward edges with 0,1. So, rules 1 through 4 in the code don't apply. When we look at rule 5, we see that the 'C' in (0,1,C,c,B) is "greater than" the 'A' in (0,1,A,a,B) so rule 5 doesn't apply either. Because the first vertex label 'C' and 'A' aren't equal, rules 6 & 7 don't apply. So, comparing the first edge (edge 0) didn't give us the answer. We move on to the next edge (edge 1). For X we have (1,2,B,a,A) and for Y we have (1,2,B,c,C). The first two items match and they're forward edges so we jump all the way down to rule 5. We see that the first vertex labels match too so we go to rule 6. When we look at rule 6 we see that the edge 'a' is "less than" the edge 'c' so we get to say that X<Y, or Example 1 is "less than" Example 5. We don't have to check the rest of the edges because we break out of the loop once we find just one example of one of the 7 rules being met.<br /><br />Let's compare example 5 (X) and example 6 (Y). Starting at edge 0, we see that they're both forward edges with (0,1,...), so rules 1-4 don't apply. When we look at rule 5 we see that the 'A' from (0,1,A,a,B) is "less than" the 'B' from (0,1,B,c,C) which satisfies rule 5. This makes example 5 "less than" example 6; the loop breaks and we have our answer. <br /><br />So, for our 3 examples, we know that example 1 < example 5 and example 5 < example 6. 
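The comparison loop is easier to see in code. This is a direct transcription of the simplified rules as written above, not the full ordering from the gSpan paper; each edge is a (v1, v2, v1_label, edge_label, v2_label) tuple, and only the first couple of edges of each example are shown:

```python
def dfs_code_less_than(code_x, code_y):
    """Return True if X is proven 'less than' Y by the seven rules above."""
    for x, y in zip(code_x, code_y):
        x1, x2, xl1, xe, xl2 = x
        y1, y2, yl1, ye, yl2 = y
        x_fwd, y_fwd = x1 < x2, y1 < y2
        if (not x_fwd) and y_fwd:                            # rule 1
            return True
        if (not x_fwd) and (not y_fwd):
            if x2 < y2:                                      # rule 2
                return True
            if x2 == y2 and xe < ye:                         # rule 3
                return True
        if x_fwd and y_fwd:
            if y1 < x1:                                      # rule 4
                return True
            if x1 == y1:
                if xl1 < yl1:                                # rule 5
                    return True
                if xl1 == yl1 and xe < ye:                   # rule 6
                    return True
                if xl1 == yl1 and xe == ye and xl2 < yl2:    # rule 7
                    return True
    return False  # never proved X < Y

example_1 = [(0, 1, 'C', 'c', 'B'), (1, 2, 'B', 'a', 'A')]  # first two edges
example_5 = [(0, 1, 'A', 'a', 'B'), (1, 2, 'B', 'c', 'C')]
example_6 = [(0, 1, 'B', 'c', 'C')]
print(dfs_code_less_than(example_1, example_5))  # rule 6 fires on edge 1
print(dfs_code_less_than(example_5, example_6))  # rule 5 fires on edge 0
```

Running this reproduces the two comparisons worked out above.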
Just like other places where inequalities are used, this information tells us that example 1 < example 6, or to be complete example 1 < example 5 < example 6.<br /><br />With the information and understanding we've covered in this post we are armed to tackle the gSpan algorithm itself. When I complete that blog post, I'll add a link to it right <a href="http://simpledatamining.blogspot.ca/2015/04/how-to-mine-frequent-patterns-in-graphs.html" target="_blank">HERE</a>. Hope this helps somebody out there!</div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-30702942308226759702015-03-10T21:06:00.003-04:002018-05-18T14:51:50.245-04:00Generalized Sequential Pattern (GSP) Mining<div dir="ltr" style="text-align: left;" trbidi="on"><script async="" src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script><script> (adsbygoogle = window.adsbygoogle || []).push({ google_ad_client: "ca-pub-2829240478141584", enable_page_level_ads: true }); </script><br /><div dir="ltr" style="text-align: left;" trbidi="on">This is going to be my first post about sequential data pattern mining. I'm starting this post by explaining the concept of sequential pattern mining in general, then I'll explain how the generalized sequential pattern (GSP) algorithm works along with its similarities to the <a href="http://simpledatamining.blogspot.ca/2015/02/apriori-principle_17.html" target="_blank">Apriori method</a>.<br /><div><br /></div><div><b><u>Sequential Pattern Mining</u></b></div><div>If you're a store owner and you want to learn about your customer's buying behaviour, you may not only be interested in what they buy together during one shopping trip. You might also want to know about patterns in their purchasing behaviour over time. "If a customer purchases baby lotion, then a new-born blanket, what are they likely to buy next?" 
With information like this, a store owner can create clever marketing strategies to increase sales and/or profit. </div><div><br /></div><div>To get this data, the <a href="http://simpledatamining.blogspot.ca/2015/02/apriori-principle_17.html" target="_blank">Apriori algorithm</a> has to be modified to account for this time delay between transactions. For this portion of the discussion I'll be referencing a technical paper by Rakesh Agrawal and Ramakrishnan Srikant entitled "Mining Sequential Patterns". Say you start with a database full of transactions that looks something like this.<br /><br /></div><div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-HlsycwDqQiY/WL8plgkn8II/AAAAAAAADYM/eQcVH7BFeBkQjd0vrDYHx6qwElCuI7JRgCLcB/s1600/GSP%2BRaw%2BTransactions.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://1.bp.blogspot.com/-HlsycwDqQiY/WL8plgkn8II/AAAAAAAADYM/eQcVH7BFeBkQjd0vrDYHx6qwElCuI7JRgCLcB/s400/GSP%2BRaw%2BTransactions.jpg" width="320" /></a></div><br />Suppose that the transaction data represents the day of the month that the item was purchased. As a human, we can see some repeat customers, and make some guesses about some of the repeat patterns that might exist, but we want an automated approach. An easy way to find sequential patterns is to utilize a modified process similar to the one developed for the <a href="http://simpledatamining.blogspot.ca/2015/02/apriori-principle_17.html" target="_blank">Apriori Algorithm</a>, which works on single purchases. To do this we just sort the table first by customer ID, and then by transaction time stamp. 
We'll get something that looks like this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-r7TX_gj9Ayc/WL8pnJqL1UI/AAAAAAAADYQ/PWJ0DT9WIlAoYAeGg92nkHoHpS0Hp4C2QCEw/s1600/GSP%2BSorted%2BRaw%2BData.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="350" src="https://1.bp.blogspot.com/-r7TX_gj9Ayc/WL8pnJqL1UI/AAAAAAAADYQ/PWJ0DT9WIlAoYAeGg92nkHoHpS0Hp4C2QCEw/s400/GSP%2BSorted%2BRaw%2BData.jpg" width="400" /></a></div><br /></div><div>You'll also notice an extra column at the right that takes the sorted information and turns it into customer sequences. The reason for this is that we can utilize this format to data mine sequential patterns much like patterns within transactions are mined (e.g. basket analysis). How people format the items in the sequence varies a little bit among the couple technical papers I've read, but let me explain how I've formatted it above. Every sequence of transactions starts with a '<' and ends with a '>'. That's all those symbols mean. The next thing you need to know is that any time 2 or more items (letters in my example) are surrounded by parentheses (e.g. "(AB)", which might be an apple and an orange), those items were purchased at the same time. If any items are not surrounded by parentheses, then that means that when those items were bought, the customer didn't buy anything else with them. So if we had a sequence like <A(BC)DE(FG)> that would mean that there were 5 different transactions. The customer first bought A, 2nd B and C together, 3rd D, 4th E, and 5th F and G together. Also, I'm not sure if you noticed, but I always have the items within parentheses ordered alphabetically. It is common to sort the items bought together from least to greatest (the fancy technical term for this is lexicographic order). 
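To make the notation concrete, here is one way to represent these customer sequences in Python (my sketch, not code from the post; the function name is made up): a sequence is a list of transactions, and each transaction is a frozenset of items.

```python
# <A(BC)DE(FG)> as a list of transactions, each a frozenset of items
sequence = [frozenset('A'), frozenset('BC'), frozenset('D'),
            frozenset('E'), frozenset('FG')]

def format_sequence(seq):
    """Render a sequence in the <A(BC)...> notation used in this post."""
    parts = []
    for txn in seq:
        items = ''.join(sorted(txn))   # lexicographic order within a transaction
        parts.append(items if len(items) == 1 else '(' + items + ')')
    return '<' + ''.join(parts) + '>'

print(format_sequence(sequence))       # <A(BC)DE(FG)>
```

Using frozensets makes the "items bought together" semantics explicit and will make support checks easy later on.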
With that understanding, now we can start talking about how the GSP Algorithm works.</div><div><br /></div><div><b><u>GSP Algorithm</u></b></div><div>The Generalized Sequence Pattern algorithm was created from a simpler algorithm for mining sequences, but it has some extra bells and whistles added so it can be more flexible for different situations. To explain the process, I'm going to start with the basics, then add the bells and whistles at the end. The beginning of the GSP algorithm is basically the same as the <a href="http://simpledatamining.blogspot.ca/2015/02/apriori-principle_17.html" target="_blank">Apriori Algorithm</a>, because for 1 item, there really is no order. So, we find all of the items in our customer ID sequence database that meet the minimum support threshold. We'll use the data above and say that the minimum support is 2 in this case. If that's true we get the table below.<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-GU8N5ZYRnX4/VtrvVTXGeJI/AAAAAAAACfw/a1WsmpitjSE/s1600/GSP%2BSingle%2BItem%2BSupport.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://1.bp.blogspot.com/-GU8N5ZYRnX4/VtrvVTXGeJI/AAAAAAAACfw/a1WsmpitjSE/s200/GSP%2BSingle%2BItem%2BSupport.jpg" width="166" /></a></div><br /></div><div>There are a couple of tricky things here. The more obvious one is that item E did not have enough support to meet the threshold, so it gets removed. Based on the <a href="http://simpledatamining.blogspot.ca/2015/02/apriori-principle_17.html" target="_blank">Apriori principle</a>, we are able to conclude that because E is not frequent, any sequence containing E cannot be frequent either. 
So, we don't need to keep track of it anymore.<br /><br />The next tricky thing is that the support for B is listed at 5, not 6, even though the 3rd customer bought item B twice. This practice is also borrowed from the <a href="http://simpledatamining.blogspot.ca/2015/02/apriori-principle_17.html" target="_blank">Apriori algorithm</a>. In the Apriori algorithm, if a customer buys 2 candy bars at once, then we only count 1 candy bar when calculating the support, because we count transactions. The same applies here, except instead of counting how many <u>transactions</u> contain an item, or itemset, we are counting how many <u>customers</u> have an item/sequence. <br /><br />Once we have these items we start the iterative process of generating larger patterns from these items and checking if they have support. This is where things get really interesting. In Apriori, the total possible combinations would be AB, AC, AD, AF, AG, BC, BD, BF, BG, CD, CF, CG, DF, DG, FG. When generating sequences, that is not nearly sufficient because the order matters, AND we can have sequences where single items repeat (e.g. I bought A, then the next day I bought A again). So we have to create tables that look like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-iFZkp8foZbo/VPkGSFwN8zI/AAAAAAAAB6Q/Z88wJHll2Sk/s1600/GSP%2B2%2BItem%2BSequences%2B(1).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="173" src="https://3.bp.blogspot.com/-iFZkp8foZbo/VPkGSFwN8zI/AAAAAAAAB6Q/Z88wJHll2Sk/s1600/GSP%2B2%2BItem%2BSequences%2B(1).jpg" width="200" /></a></div> Because order matters, there are a lot more options. 
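This customer-level counting rule can be sketched in Python (my own sketch with made-up data, not the paper's code): support counts distinct customers, so a customer who buys B twice still adds only 1 to B's support.

```python
from collections import Counter

def item_supports(customer_sequences):
    """Support of a single item = how many customers have it anywhere in
    their sequence, counted at most once per customer."""
    counts = Counter()
    for seq in customer_sequences:
        distinct_items = set().union(*seq)   # collapse the whole sequence
        counts.update(distinct_items)
    return counts

# a customer who buys B in two separate transactions still adds only 1 to B
db = [[frozenset('AB'), frozenset('B')],     # customer 1: <(AB)B>
      [frozenset('B')],                      # customer 2: <B>
      [frozenset('A')]]                      # customer 3: <A>
print(item_supports(db)['B'])                # 2
print(item_supports(db)['A'])                # 2
```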
Oh and in case you forgot, there is also the possibility of 2+ items being bought in the same transaction.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-_HU7wXo6x44/VPkGT-3DXnI/AAAAAAAAB6Y/gdrGf3UAHyU/s1600/GSP%2B2%2BItem%2BSequences%2B(2).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-_HU7wXo6x44/VPkGT-3DXnI/AAAAAAAAB6Y/gdrGf3UAHyU/s1600/GSP%2B2%2BItem%2BSequences%2B(2).jpg" /></a></div>The diagonal values are blank here because they're already covered in the first 2-item table. Just to make sure you understand, (AA) isn't there because if A and A were bought in the same transaction it would only be counted once, so we never have two of the same item together within parentheses. The lower left is also blank, but this is because of the convention I already described of always listing items purchased together in ascending order. <br /><br />So, now that we understand all of that, we can check our 51 possible 2-item sequences against the database to see if they meet the support threshold. I'll walk you through the first row of 2-item sequences to show you how this is done (see the table below).<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-r8w0N8JWGgg/VPkJ9l0KlpI/AAAAAAAAB6k/aBzqLgfeQv8/s1600/GSP%2B2-item%2Bsupport%2Bcounting.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="136" src="https://4.bp.blogspot.com/-r8w0N8JWGgg/VPkJ9l0KlpI/AAAAAAAAB6k/aBzqLgfeQv8/s1600/GSP%2B2-item%2Bsupport%2Bcounting.jpg" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><br />In the first column, we see that there are no customers that ever bought item A on more than 1 occasion, so support there is 0. For <AB>, you can see that customers 1 and 5 bought those items in that sequence. 
For <AC>, notice that customer 4 bought (AB) together then C, but we get to pick out just A from that combined transaction to get our <AC> sequence. Notice that the items don't have to be right next to each other. It's OK to have other items in between. <AD> follows similar logic. For <AF> we don't count customers 3 or 4 because although they have A and F, they aren't in the right order. To be counted here, they need to be A, then F. When we look for the support of <FA> those will get counted. There is one last rule you need to know for counting support of sequences. If you are looking for support of a sequence like <FG>, then customer 1 doesn't count. It only counts if F is bought before G, NOT at the same time. If you work through the rest of the 2-item sequences that <u>weren't</u> purchased together (i.e. not ones like (AB)), you get support counts that look like this<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-gAf5g3StLoc/VPkR9iXvYVI/AAAAAAAAB68/5XmGDn4AgPc/s1600/GSP%2B2-item%2Bordered%2Bsupport.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-gAf5g3StLoc/VPkR9iXvYVI/AAAAAAAAB68/5XmGDn4AgPc/s1600/GSP%2B2-item%2Bordered%2Bsupport.jpg" /></a></div>You see that I have struck out any 2-item sequences that do not meet the support threshold of 2. Now let's take a look at the 2-item sets that could have been purchased together (e.g. (AB)). This one is a little bit easier, because there are only 5 times in our database when 2 items were purchased at the same time. They were (FG), (AB), (AB), (BC) and (DE). Only (AB) shows up twice so we keep it and get rid of the rest. 
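These counting rules can be sketched in Python (a sketch of my own, not code from the paper): each element of a candidate must fit inside one transaction, later elements must come from strictly later transactions, and gaps between matched transactions are fine.

```python
def contains(customer_seq, candidate):
    """True if candidate (a list of itemsets) appears in customer_seq in
    order.  Each candidate element must fit inside ONE transaction, and
    later elements must come from strictly later transactions, so <FG> is
    NOT supported by a customer who bought F and G at the same time."""
    pos = 0
    for element in candidate:
        # scan forward for the next transaction containing this element
        while pos < len(customer_seq) and not element <= customer_seq[pos]:
            pos += 1
        if pos == len(customer_seq):
            return False
        pos += 1                 # the next element needs a later transaction
    return True

seq = [frozenset('AB'), frozenset('C')]                  # customer 4: <(AB)C>
print(contains(seq, [frozenset('A'), frozenset('C')]))   # True:  <AC> is supported
print(contains(seq, [frozenset('A'), frozenset('B')]))   # False: A, B were simultaneous
print(contains(seq, [frozenset('AB')]))                  # True:  <(AB)> is supported
```

The support of a candidate is then simply the number of customer sequences for which `contains` returns True.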
Now we have these 2-item sequences left to work with:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-S7XqCFBE1Vs/VPkT1JWlvwI/AAAAAAAAB7I/I3WWKpf4Vmg/s1600/GSP%2B2-items%2Bwith%2Bsupport.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="27" src="https://3.bp.blogspot.com/-S7XqCFBE1Vs/VPkT1JWlvwI/AAAAAAAAB7I/I3WWKpf4Vmg/s1600/GSP%2B2-items%2Bwith%2Bsupport.jpg" width="400" /></a></div><br />Now we start the next iteration looking for 3-item sequences. This is where I'm going to start referring to "Mining Sequential Patterns: Generalizations and Performance Improvements" by Ramakrishnan Srikant and Rakesh Agrawal. In that paper they describe a process for generating candidate sequences that I think is pretty cool, now that I understand it. You take all of the remaining sequences from the last step (AB, AC, AD, AF, AG, BC...), then you remove the first item from the sequence. For AB, we remove A and we're left with B. Do this for all of the sequences (see 2nd column in table below). Then you do the same thing, but remove the last item (3rd column in table below). This is pretty straightforward except when you're dealing with a sequence that ends in a multiple item purchase like (AB). When that happens, you have to remove all of the possible items one at a time and treat them separately. So if you're removing the "first" item from (AB) you remove A and get B leftover, and then you remove B and get A leftover. Hopefully I haven't lost you in that explanation. If I did, I think reviewing this table will demonstrate what I'm talking about. 
<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-uaCTU2N3ifM/VP9XhqHp-xI/AAAAAAAAB8Q/WY5sMl8D36o/s1600/3-seq%2Bfirst%2Band%2Blast.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://3.bp.blogspot.com/-uaCTU2N3ifM/VP9XhqHp-xI/AAAAAAAAB8Q/WY5sMl8D36o/s1600/3-seq%2Bfirst%2Band%2Blast.jpg" width="100" /></a></div>Now that we have this information (if we were programming this we wouldn't actually create a table...it's just to make it easier to see and understand for this blog post), We combine the sequences together where their "-1st" and "-Last" columns match. So if we're starting from the top, we see that the 2-item sequence AB matches up with BC, BD, BF, BG and (AB) to create ABC, ABD, ABF, ABG and A(AB). The A(AB) one is tricky because you have to remind yourself that the order within the parentheses is just convention and it could easily be written A(BA) which makes it easier to see that the B's match up. Working through the rest of the table this way (being careful not to create any duplicate candidates) we populate the "3-seq after join" column in the table below<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-80RevE18pCs/VP9bXlOsaZI/AAAAAAAAB8k/FQzfTMyZW-g/s1600/3-Seq%2BGeneration%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://2.bp.blogspot.com/-80RevE18pCs/VP9bXlOsaZI/AAAAAAAAB8k/FQzfTMyZW-g/s1600/3-Seq%2BGeneration%2BTable.jpg" width="516" /></a></div>The next step is to prune these 3-item sequences. Many of them have portions of their 3-item sequences that aren't supported from the previous round of 2-item sequences. A good example of this is the candidate sequence AFA. AF and FA are supported 2-item sequences, but AA is not a supported 2-item sequence, so this one gets pruned out. 
This pruning is based on the same <a href="http://simpledatamining.blogspot.ca/2015/02/apriori-principle_17.html" target="_blank">Apriori principle</a> that we have used in prior posts, and it helps save computing time because we don't want to comb through the database to find the support for a sequence that we know won't meet the support threshold. Now that we have this slimmed down list of 3-item sequences, we scan the database again to determine support just as it was described above for 2-item sequences. When that is completed we end up with the 3-item sequences in the last column of the table above. <br /><br />At this point it is just "lather, rinse, repeat" as my shampoo bottle suggests. Take that list of 3-item sequences and generate 4-item sequences using the same method. We remove the first and last items and put them in the "-1st" and "-Last" columns below.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-pMaTcgPG9l0/VuHrWkBtU1I/AAAAAAAACgg/2ttUn9_9KOknlUT3MXynG8mxNqMGE1Xeg/s1600/4-seq%2Bfirst%2Band%2Blast.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-pMaTcgPG9l0/VuHrWkBtU1I/AAAAAAAACgg/2ttUn9_9KOknlUT3MXynG8mxNqMGE1Xeg/s1600/4-seq%2Bfirst%2Band%2Blast.jpg" /></a></div><br />Then we look for sequences that have a "-1st" value that matches a "-Last" value of another sequence. In this case we only find matches for "BF" and "BG". You should prune any candidate whose 3-item subsequences aren't all supported/frequent, but in this case they're fine. Then we check the database for support to finalize the list. 
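One round of this join-and-prune step can be sketched in Python. This is my own simplified sketch: it handles only flat sequences where every element is a single item, so itemset elements like (AB) would need the extra bookkeeping described above.

```python
from itertools import product

def generate_and_prune(frequent):
    """One GSP candidate-generation round for flat sequences (tuples of
    single items): join two (k-1)-sequences when one minus its first item
    equals the other minus its last item, then prune any candidate that
    has an infrequent (k-1)-subsequence (the Apriori principle)."""
    freq = set(frequent)
    joined = {s1 + s2[-1:] for s1, s2 in product(freq, repeat=2)
              if s1[1:] == s2[:-1]}
    # prune: dropping any one item must leave a frequent (k-1)-sequence
    return {c for c in joined
            if all(c[:i] + c[i + 1:] in freq for i in range(len(c)))}

# AF and FA are frequent, but AA is not, so candidate AFA gets pruned
freq_2 = {('A', 'F'), ('F', 'A'), ('A', 'B'), ('B', 'F')}
print(sorted(generate_and_prune(freq_2)))    # [('A', 'B', 'F')]
```

The surviving candidates would then be checked against the database for actual support, exactly as in the 2-item round.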
If you do that you end up with information similar to the table below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-9v0JhPrs8tI/VP-HXT3wFwI/AAAAAAAAB84/iu1zrZrtJkY/s1600/4-seq%2Bgeneration%2Btable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="61" src="https://2.bp.blogspot.com/-9v0JhPrs8tI/VP-HXT3wFwI/AAAAAAAAB84/iu1zrZrtJkY/s1600/4-seq%2Bgeneration%2Btable.jpg" width="400" /></a></div><br />You can see that, at this point, this example gets pretty simple. If you try the iteration again, the trick of removing the first item and last item yields no matches and you're done. After that you would just output all of the supported/frequent sequences to the user, which in this case would be the following (as long as I don't make a copy and paste mistake :/ ).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-68tqRS8RVc4/VP-InfalBZI/AAAAAAAAB9E/an6rVs7q9F0/s1600/Final%2BList%2Bof%2BSequences.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://4.bp.blogspot.com/-68tqRS8RVc4/VP-InfalBZI/AAAAAAAAB9E/an6rVs7q9F0/s1600/Final%2BList%2Bof%2BSequences.jpg" width="231" /></a></div><b><u>Bells and Whistles</u></b><br />I promised to describe the bells and whistles for this process as well. Here's a list of extras that can be added<br /><br /><ol><li>Taxonomies/Hierarchies: Harry Potter and the Sorcerer's Stone is part of a series by J.K.Rowling, which is part of the children's fantasy genre. Taxonomies like this can be incorporated so that more general sequences can be found if the more specific ones aren't supported. The way that the algorithm handles these taxonomies is to add in the higher level items (e.g. J.K. Rowling is higher level than Harry Potter) to the combined transactions. 
For example, if you had a sequence like this <A(1, B)A> then you would translate that into a sequence like <(A, Letter, Symbol)(1, Number, B, Letter, Symbol)(A, Letter, Symbol)> if you had a taxonomy like the one here:<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-gB-M2Z3leaQ/VP-OKS5GESI/AAAAAAAAB9U/KrVPaEpUb2Q/s1600/Taxonomy%2Bexample.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="214" src="https://2.bp.blogspot.com/-gB-M2Z3leaQ/VP-OKS5GESI/AAAAAAAAB9U/KrVPaEpUb2Q/s1600/Taxonomy%2Bexample.jpg" width="320" /></a></div></li><li>Window Size: If you've had a customer for years, maybe it isn't that interesting if they support a particular pattern because they have so much buying behaviour on record that they might just support it by chance. Window sizes limit the span of time when transactions are deemed to support a sequence. (e.g. I only want buying patterns for pregnant women during a 9 month window)</li><li>Max-gap: This parameter is defined by the user to filter out large gaps in data sequences. Let's say a customer buys an iPod and then, even though they are within the specified window size, leaves a large gap before buying a new set of headphones. To a business owner, this gap might make them "uninteresting" as a customer to market to, so the GSP algorithm can filter this out as it is running and checking for support.</li><li>Min-Gap: Think of this as answering the question "how much time would I let pass between transactions before I would consider 2 purchases separate elements in my sequences?" An example of this would be me at Home Depot. I go to start a project on a Friday evening and buy what I think I need. It isn't until Saturday about noon that I realize I had no idea what I REALLY needed and go back for more stuff. Some store owners may want to treat these as one "transaction" or event. 
If the min-gap was set to 24 hours in this case, all of my purchases would be grouped together to look something like this <(Friday night stuff, Saturday noon stuff)>.</li></ol><div>I admit that there are more details to how you might actually implement these bells and whistles if you had to code them. For those that are interested, let me know if you'd like another post on that. Otherwise, I refer you to "Mining Sequential Patterns: Generalizations and Performance Improvements" by Ramakrishnan Srikant and Rakesh Agrawal.</div><div><br /></div><div>For those of you with the perseverance to get to the end of this lengthy post I congratulate you! As always, I hope this was helpful for somebody out there.</div></div></div></div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-30053250182876921932015-03-03T21:39:00.000-05:002018-05-18T14:56:40.723-04:00Null-Invariant Measures of Interestingness<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script><script> (adsbygoogle = window.adsbygoogle || []).push({ google_ad_client: "ca-pub-2829240478141584", enable_page_level_ads: true }); </script>In 2 of my previous posts about measures of interestingness (<a href="http://simpledatamining.blogspot.ca/2015/02/lift.html" target="_blank">Lift</a> and <a href="http://simpledatamining.blogspot.ca/2015/02/chi-squared.html" target="_blank">Chi-Square</a>) I mentioned I would be writing a post about null-invariant measures of interestingness. I'm a man of my word so here you go. 
The problem I presented in those posts can be summarized with these 2 tables.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-MLeac0dbD-k/VvmmClJEmEI/AAAAAAAACg4/HmkN664kyFYYhM0RC_1mLsaMnPsZ57EaQ/s1600/Lift%2B%2526%2BChi-squared%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="75" src="https://2.bp.blogspot.com/-MLeac0dbD-k/VvmmClJEmEI/AAAAAAAACg4/HmkN664kyFYYhM0RC_1mLsaMnPsZ57EaQ/s320/Lift%2B%2526%2BChi-squared%2BTable.jpg" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-hyHrwSb8WBc/VvmlvBCF5LI/AAAAAAAACg0/kk-m9u4lSu8xxYX1UhvH-BMvzt33SIK_A/s1600/Lift%2B%2526%2BChi-squared%2BTable%2B%2528revised%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="75" src="https://2.bp.blogspot.com/-hyHrwSb8WBc/VvmlvBCF5LI/AAAAAAAACg0/kk-m9u4lSu8xxYX1UhvH-BMvzt33SIK_A/s320/Lift%2B%2526%2BChi-squared%2BTable%2B%2528revised%2529.jpg" width="320" /></a></div><br />If you click back to those previous posts (links above) you'll see that the lift for the first table was 0.714 and the lift for the 2nd table would be calculated as 0.929. The chi-square value for the first table was 0.586 and the value for the 2nd is calculated as 0.022. For both these measures we get different numbers when the size of the bottom right cell ("null" count) is changed. 
The table below shows this information in a slightly different view.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-zZWI9AkMbVU/VvmmqAbYkRI/AAAAAAAAChA/wetkRcF1pcQ87JiYBlKJ--GkDvD6XUbiQ/s1600/Null-Invariant%2BTable1%2B%2528revised%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="102" src="https://1.bp.blogspot.com/-zZWI9AkMbVU/VvmmqAbYkRI/AAAAAAAAChA/wetkRcF1pcQ87JiYBlKJ--GkDvD6XUbiQ/s400/Null-Invariant%2BTable1%2B%2528revised%2529.jpg" width="400" /></a></div><br />Since we're pattern mining, and we're really only interested in the interaction between the two items (apples and oranges in this example), we'd love to have a way to measure interestingness that doesn't include that "null" count in the calculation at all. Data scientists call this type of an interestingness measure "null-invariant". Let's see what they've come up with to help us out here.<br /><br />Based on a white paper entitled "Selecting the Right Interestingness Measure for Association Patterns" written by Pang-Ning Tan, Vipin Kumar and Jaideep Srivastava in 2002, there are 3 measures for interestingness that have this "null-invariant" property that we're looking for. The three that they found are Confidence, Cosine, and Jaccard. 
The Confidence measure actually has at least 3 variations that I'm aware of, so we'll cover 5 different null-invariant measures in total here.<br /><br /><b><u>Confidence</u></b><br />If you've read my post about <a href="http://simpledatamining.blogspot.ca/2015/02/support-and-confidence-in-pattern.html" target="_blank">support and confidence</a>, you may remember near the end that I mentioned that confidence gives different values based on the way it's calculated (e.g. the confidence for Apples <span style="background-color: white; font-family: "wingdings"; font-size: 16px; line-height: 22.3999996185303px;">à</span> Oranges may not be the same as the confidence for Oranges <span style="background-color: white; font-family: "wingdings"; font-size: 16px; line-height: 22.3999996185303px;">à</span> Apples). If that is the case, which one should you pick? With so many potential patterns that can be found in data already, it is preferable for data scientists to find a way to combine these 2 options to get 1 answer instead of having to duplicate all of their results to show both sides of the same coin all the time. To do this, they have taken the original confidence equation P(A|B), or P(B|A) depending on how you look at it, and created the following alternate forms<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-QMaUdP-JPRc/VPYu5LplpZI/AAAAAAAAB4E/HhHcwkvNBJw/s1600/Null-invariant%2Bconfidence%2Bdefinitions.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-QMaUdP-JPRc/VPYu5LplpZI/AAAAAAAAB4E/HhHcwkvNBJw/s1600/Null-invariant%2Bconfidence%2Bdefinitions.jpg" /></a></div><br />At this point in time it is my duty as your guide to understanding data mining in a simple way to calm you down if the stuff in the definition column gave you heart palpitations. 
All three of these are actually VERY easy to understand if you already understand <a href="http://simpledatamining.blogspot.ca/2015/02/support-and-confidence-in-pattern.html" target="_blank">confidence</a>. I'm actually going to work from the bottom of this list up. Max Confidence basically says, I'm just going to pick the biggest value of confidence between the 2 ways of looking at it. Kulczynski (pronounced K<span style="font-size: x-small;">Ə</span>l-zin’-skee) takes the approach of averaging the 2 together instead. For All Confidence, it may not be apparent, but you're actually taking the minimum value of confidence. Confidence is just the count (or support) of records that have both items (A<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px; line-height: 22.3999996185303px;">∩</span>B) divided by the count of the first item in a statement like A<span style="background-color: white; font-family: "wingdings"; font-size: 16px; line-height: 22.3999996185303px;">à</span>B. If you divide by the maximum of the support for A and the support for B, then you're really just finding the minimum value of confidence between the 2 options. So, like I said, it might look scary, but it's simple.<br /><br /><b><u>Cosine</u></b><br />Another good interestingness measure is the cosine function. The way that I think about the cosine function is that it is essentially the lift function in disguise, but is null-invariant. Let me show you what I mean. 
Here are the equations for both<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ycEvgBrj_IE/VPZg2cXequI/AAAAAAAAB4U/euYfAYISU7w/s1600/Cosine%2Bto%2BLift%2BComparison.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-ycEvgBrj_IE/VPZg2cXequI/AAAAAAAAB4U/euYfAYISU7w/s1600/Cosine%2Bto%2BLift%2BComparison.jpg" /></a></div><br />You can see that the only real difference between the 2 is that Cosine has a square root in the denominator. There's a good reason for this. In my post about lift, I mentioned that to calculate lift correctly, you have to make sure you are using the support as a percent of the total records in the database. If you don't do this, the lift value is wrong. The fact that you have to divide by the grand total of all the records is also why the lift measure is susceptible to changes in the number of null records. Cosine overcomes this with a trick I was taught in intermediate school<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-wCeRMq8Ey50/VPZi5sZRZGI/AAAAAAAAB4g/y_V6SuP_XG0/s1600/Cosine%2Bfraction%2Bexplanation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="135" src="https://2.bp.blogspot.com/-wCeRMq8Ey50/VPZi5sZRZGI/AAAAAAAAB4g/y_V6SuP_XG0/s1600/Cosine%2Bfraction%2Bexplanation.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">When I first saw the lift equation written out this way I really wanted to get the "grand total" values to cancel out because it would be so much cleaner. Another way of writing (or thinking about) lift is something like this (AB/G)/((A*B)/G^2) where AB is the count of A & B together, A is the count of A, B is the count of B, and G represents the grand total. 
Using fraction math, you can cancel out one of the grand totals, but not the 2nd one leaving you with (AB)/(A*B/G). Cosine takes care of this problem. When we take the square root of the denominator, we can cancel out the grand total from the function entirely making the measure null-invariant. It looks like this: (AB/G)/sqrt((A*B)/G^2), which simplifies to AB/sqrt(A*B).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b><u>Jaccard</u></b></div><div class="separator" style="clear: both; text-align: left;">The Jaccard function is defined as |A<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px; line-height: 22.3999996185303px;">∩</span>B|/|A<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px; line-height: 22.3999996185303px;">∪</span>B|. To explain what that means let's go to Venn diagram world.</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-0Xec064LuNE/VPZnPzq6s6I/AAAAAAAAB4s/fCGgdTejutU/s1600/AB%2BVenn%2BDiagram.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="156" src="https://3.bp.blogspot.com/-0Xec064LuNE/VPZnPzq6s6I/AAAAAAAAB4s/fCGgdTejutU/s1600/AB%2BVenn%2BDiagram.jpg" width="200" /></a></div><div class="separator" style="clear: both; text-align: left;">As a reminder, that upside-down U symbol is an intersection and is represented by the orange area in the diagram above. The right-side-up U symbol is a union and is represented by all of the area that is either yellow, orange or red. The '|' symbols in the formula are just a notation that is used to indicate that we want the count of items in that set, instead of the set itself. 
Now that that is fresh in our minds again, the numerator of the Jaccard is the orange area, and the denominator is the whole yellow-orange-red area, NOT double-counting the orange area where the circles overlap. With that understanding, you can see why Jaccard is null-invariant; it doesn't use any data from the blank space, the null records, to be calculated. I could make the white space in the Venn diagram above HUGE and the Jaccard would still have the same value. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b><u>Comparisons</u></b></div><div class="separator" style="clear: both; text-align: left;">Lastly, let's take a look at our apple and orange example one more time with all of these different measures of interestingness.</div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-LYdaszzfEAk/VvmnZ0rUYcI/AAAAAAAAChI/Vgq5X9CSHCoJ10hZNhtG7PBMyJB0bpLgA/s1600/Null%2BVariant%2BMeasure%2BReview%2B%2528revised%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="164" src="https://4.bp.blogspot.com/-LYdaszzfEAk/VvmnZ0rUYcI/AAAAAAAAChI/Vgq5X9CSHCoJ10hZNhtG7PBMyJB0bpLgA/s640/Null%2BVariant%2BMeasure%2BReview%2B%2528revised%2529.jpg" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">First of all, in the 2nd row, you can see that all of the measures we've added to this table aren't affected by the different "null" number. I also have added a couple of rows to demonstrate how the different measures change with different values in the table. In row 3 you can see what a positive correlation looks like, instead of a negative one. In row 4, you can see what no correlation looks like. 
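That 2nd-row null-invariance is easy to check numerically. Here's a minimal sketch, assuming the apple/orange counts from my lift post (2 joint purchases, 8 apples, 7 oranges out of 20 transactions, with a "big" variant that tacks on roughly ten million null transactions); the function and variable names here are my own:

```python
from math import sqrt

def null_invariant_measures(ab, a, b):
    """Interestingness measures computed from item counts only:
    ab = transactions with both items, a / b = transactions with each
    item. The grand total never appears, which is exactly why these
    measures are null-invariant."""
    return {
        "all_confidence": ab / max(a, b),     # min of the 2 confidences
        "max_confidence": ab / min(a, b),     # max of the 2 confidences
        "kulczynski": (ab / a + ab / b) / 2,  # average of the 2
        "cosine": ab / sqrt(a * b),           # lift with G cancelled out
        "jaccard": ab / (a + b - ab),         # intersection / union
    }

def lift(ab, a, b, grand_total):
    """Lift needs the grand total, so it is NOT null-invariant."""
    return (ab / grand_total) / ((a / grand_total) * (b / grand_total))

# Apple/orange table: 2 joint purchases, 8 apples, 7 oranges, 20 rows.
m_small = null_invariant_measures(2, 8, 7)
# Same purchases with ~10 million null rows added: inputs don't change.
m_big = null_invariant_measures(2, 8, 7)

lift_small = lift(2, 8, 7, 20)          # ~0.714
lift_big = lift(2, 8, 7, 10_000_013)    # ~357,143 -- wildly different
```

Adding null transactions changes none of the inputs to the null-invariant measures, while lift swings from roughly 0.71 to several hundred thousand on the exact same purchase behavior.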
It's interesting to note that for Jaccard no correlation is represented by a value of 1/3. Then in rows 5 and 6 I showed the extremes of these null-invariant measures...all of them are bound between 0 and 1. 0 means STRONG negative correlation, and 1 means STRONG positive correlation. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Row 7 is there as a tale of caution. Based on an astute comment on this post, I revised the table above so that you can compare rows 1, 2, and 3. They all have the same values except for the "null" number. If the "null" number gets too big, like in row 7, then "someone who has bought an orange is far more likely than the general population to buy an apple", yet the null-invariant measures don't change at all. This is not immediately obvious, and can be a stumbling block when using null-invariant measures of interestingness.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I think that sums it up. I hope that helps!</div><br />Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-59105872277164465862015-02-28T15:33:00.005-05:002018-05-18T14:56:10.676-04:00Chi-Squared<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script><script> (adsbygoogle = window.adsbygoogle || []).push({ google_ad_client: "ca-pub-2829240478141584", enable_page_level_ads: true }); </script><div dir="ltr" style="text-align: left;" trbidi="on">Another method of "interestingness" that can be used in data mining is the chi-square test. This test is actually one that I have a fair amount of experience with from my six sigma and manufacturing background. 
I'll use a simple example from my past to walk you through how it is calculated first, then give some warnings about ways it can be misused.<br /><br />Suppose you have a factory that makes widgets and have 2 machines in the manufacturing process that perform the same step, say they're both plastic injection molding machines. When parts come out you have an inspector that classifies the parts as good or bad. Over time, you collect all of this data and come up with a table like the one below.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/--u7PzjoD7M8/VPIaZ11izqI/AAAAAAAAB14/-z8IBPbc6Xo/s1600/Chi-square%2Bdefect%2Btable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="88" src="https://3.bp.blogspot.com/--u7PzjoD7M8/VPIaZ11izqI/AAAAAAAAB14/-z8IBPbc6Xo/s1600/Chi-square%2Bdefect%2Btable.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><span id="goog_1305597823"></span><span id="goog_1305597824"></span><br /></div>It looks like they produced roughly the same quantity of bad parts, but machine 2 also made fewer parts overall. What chi-squared will do is help us determine if that difference is (statistically) significant. The first thing we need to do is determine what we would <i>expect</i> if the 2 machines had the same quality. To get this expectation we use the values in the row total and column total. If there were no real difference between the 2, then we would expect a good ratio of 605/657=92.08% for both machines. To get the bad ratio (defect rate) we just take 1 minus the good ratio and we get 7.92%. Now we just need to account for the fact that the 2 machines produced different quantities. With these <i>expected</i> good/bad ratios we can calculate how many good parts we expect if we produce 400 or 257 parts. 
We would <i>expect </i>that machine 1 would produce 400*92.08% = 368.34 good parts and 400*7.92% = 31.66 bad parts. We would <i>expect</i> that machine 2 would make 257*92.08% = 236.66 good parts and 257*7.92% = 20.34 bad parts. This gives us an <i>expected</i> table that looks like this.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-xr_cbRzImpA/VPIdaWu7HwI/AAAAAAAAB2M/4k4RG3BWhOs/s1600/chi-square%2Bexpectation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="106" src="https://3.bp.blogspot.com/-xr_cbRzImpA/VPIdaWu7HwI/AAAAAAAAB2M/4k4RG3BWhOs/s1600/chi-square%2Bexpectation.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">The formula to calculate any expected value in the table above is [Row Total]*[Column Total]/[Grand Total]. If you look back to the logic in the paragraph above, you'll see that is exactly what we did.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Now that we have expected values, we can calculate the chi-square statistic. This is done by taking each cell (e.g. good parts from machine 1) and calculating (observed value - expected value)^2/(expected value). For the upper left cell this would be (375-368.34)^2/368.34 = 0.1203. 
After doing this for each cell we get a table that looks like this.</div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-H1xiVE-GJ-Q/VPIfKJOAvxI/AAAAAAAAB2Y/NpBI9IOjcqQ/s1600/Chi-square%2Bvalues.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-H1xiVE-GJ-Q/VPIfKJOAvxI/AAAAAAAAB2Y/NpBI9IOjcqQ/s1600/Chi-square%2Bvalues.png" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Then you just have to add these up to get your chi-square statistic; in this example it's 3.89. A chi-square statistic doesn't mean much unless you know how many degrees of freedom you have. Don't know what <a href="http://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics)" target="_blank">degrees of freedom</a> means? That's OK. For now, all you need to know is that for a chi-square test it is equal to (# of rows - 1)*(# of columns - 1). We just have a 2x2 table here so we get (2-1)*(2-1) = 1 degree of freedom. The 3.89 chi-squared value and the 1 degree of freedom are used to look up a p-value. I think of a p-value as a <b><u>p</u></b>robability value. When you're looking at a p-value, you are looking at the expected probability that nothing interesting is going on in your data. So, if you want to find something interesting, you're hoping for REALLY small p-values. In order to get the p-value I almost always use MS Excel. The Excel function would look like this "=CHIDIST(3.89,1)". For our problem, we get a p-value of 0.0486. This can be interpreted as a 4.86% chance that nothing interesting is happening here. A common threshold that statisticians use for this p-value is 5%. Since our 4.86% is less than 5%, we would say that this difference is statistically significant. 
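If you'd rather not use Excel, the whole calculation can be reproduced with a short script (standard library only; for 1 degree of freedom the chi-square p-value has a closed form via erfc, so no stats package is needed). The counts are read off the tables above:

```python
from math import erfc, sqrt

# Observed counts: rows = machine 1 / machine 2, columns = good / bad.
observed = [[375, 25], [230, 27]]

row_totals = [sum(row) for row in observed]        # [400, 257]
col_totals = [sum(col) for col in zip(*observed)]  # [605, 52]
grand_total = sum(row_totals)                      # 657

# Expected cell = [Row Total] * [Column Total] / [Grand Total]
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi_square = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(2) for j in range(2)
)

# p-value for 1 degree of freedom; erfc(sqrt(x / 2)) is the closed form
# of the chi-square right-tail probability when df = 1, i.e. the same
# number Excel's CHIDIST(x, 1) returns.
p_value = erfc(sqrt(chi_square / 2))
```

Running this gives a chi-square statistic of about 3.89 and a p-value of about 0.0486, matching the Excel result.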
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Now that we know the mechanics of how to calculate it, let's talk briefly about the intuition behind the numbers. If 2 features (e.g. good/bad and machine 1/2) have nothing interesting going on what is the chi-square value going to be? To start, you would <i>expect</i> the values in the expectations table to be exactly the same as the data you started with. Once you know that, you also know that all of the values in the chi-square table will be zero; (observed-expected)^2/expected will give you 0 divided by something, which is zero. The chi-square value can never be negative because the numerator is a squared value, and the denominator is an expected number of positive counts. The least interesting thing possible is 0, and the most interesting thing would be...some ridiculously large number (theoretically infinity), but in real life you don't ever get infinity as the answer.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Now, after all that explanation, I've got to let you down a little bit because chi-square isn't very good for data mining applications for the same reason why the <a href="http://simpledatamining.blogspot.ca/2015/02/lift.html" target="_blank">lift</a> measure has problems. 
If we use the same example that I used in my lift post, we have transactions for apples and oranges like this</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-N4SuW3P4SxQ/VPIlAbLy-RI/AAAAAAAAB2o/dslc9KUtFBw/s1600/Lift%2B%26%2BChi-squared%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="76" src="https://3.bp.blogspot.com/-N4SuW3P4SxQ/VPIlAbLy-RI/AAAAAAAAB2o/dslc9KUtFBw/s1600/Lift%2B%26%2BChi-squared%2BTable.jpg" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-SdYFAp_HYrI/VPIlBzJRgYI/AAAAAAAAB2w/AxkmT5BzQ8A/s1600/Lift%2B%26%2BChi-squared%2BTable%2B(2).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="78" src="https://3.bp.blogspot.com/-SdYFAp_HYrI/VPIlBzJRgYI/AAAAAAAAB2w/AxkmT5BzQ8A/s1600/Lift%2B%26%2BChi-squared%2BTable%2B(2).jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">If I follow the instructions above to calculate chi-square for the first table, I get 0.586. If I do the same thing for the 2nd table, I get 714,283.7...hmmm. In theory the interaction between apple and orange purchases is the same in both tables, but the chi-square statistic gets very confused by the double null transactions. 
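Here's a quick sketch of that comparison; the second table's "neither" cell is assumed to be ten million transactions, which reproduces the numbers above:

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    return sum(
        (table[i][j] - r * c / grand) ** 2 / (r * c / grand)
        for i, r in enumerate(row_totals)
        for j, c in enumerate(col_totals)
    )

# Layout: [[both, apples only], [oranges only, neither]].
# Same purchase counts in both tables; only the "neither" cell changes
# (7 transactions vs ten million).
chi_small = chi_square([[2, 6], [5, 7]])           # ~0.586
chi_big = chi_square([[2, 6], [5, 10_000_000]])    # ~714,283.7
```

Identical purchasing behavior, yet the statistic jumps by six orders of magnitude just because of the null records.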
That is why null-invariant measures for interestingness are so important when mining data patterns (another post to come soon explaining these measures).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><span id="goog_1167676704"></span></div><br /></div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-53768959395406275962015-02-26T22:02:00.001-05:002015-02-26T22:04:55.901-05:00Lift<b><u>Lift</u></b> is an objective measure of "interestingness" that has been used in various fields including statistics. It is also an option when we are data mining. In case you've run across this measure and don't fully understand it, let me give you a quick summary of what it is, how it's calculated and some of its properties.<br /><br />Much like the properties of <a href="http://simpledatamining.blogspot.ca/2015/02/support-and-confidence-in-pattern.html" target="_blank">support and confidence</a>, lift attempts to use a numerical calculation to determine how important a pattern or correlation is. If we use the table below as a simple example we can kind of see that people who buy oranges are less likely to buy an apple and vice versa (this is called a negative correlation), but in a data mining process we want to have an automated way to see this. 
Lift is one option to automate and determine this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-ekQ9bYHgWF8/VO_XVPqkWgI/AAAAAAAAB00/VZmV0EJ8X6s/s1600/Lift%2B%26%2BChi-squared%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ekQ9bYHgWF8/VO_XVPqkWgI/AAAAAAAAB00/VZmV0EJ8X6s/s1600/Lift%2B%26%2BChi-squared%2BTable.jpg" height="78" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>To calculate lift, first find the support for both items together (in this case 2/20=10%) and this will be your <a href="http://en.wikipedia.org/wiki/Fraction_(mathematics)" target="_blank">numerator</a>. Notice that I used the "% of total" version of support. If you don't do this you will get a different answer and it will be wrong. For the denominator, multiply the total supports for the 2 items together (bottom left and top right total values); in this case that would be 8/20 * 7/20 = 0.14. The lift for buying apples and oranges together would be ~0.714 for this example.<br /><br />So what does this 0.714 mean really? If the lift were to work out to equal exactly 1, then that would mean that there is no correlation at all between the 2 items. Since our lift is less than 1, it means that there is a negative correlation (that's what we could see with our own intuition before we started). If the lift turns out to be greater than 1, then there is a positive correlation. If you look at how lift is calculated you will notice that because all of the values that go into the fraction are positive counts (or fractions of positive counts), the value of lift always has to be positive (≥0). It can get really BIG though. 
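Here's the same arithmetic as a short sketch (counts read off the table above):

```python
# 2 transactions with both items, 8 with apples, 7 with oranges,
# 20 transactions in all.
both, apples, oranges, total = 2, 8, 7, 20

numerator = both / total                            # support(A and B) = 10%
denominator = (apples / total) * (oranges / total)  # (8/20) * (7/20) = 0.14
lift = numerator / denominator                      # ~0.714: negative correlation
```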
If I change the table above to have lots of transactions without apples or oranges, then I can get a really big number for my lift<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-poGkPbj6qlg/VO_c4lSbd-I/AAAAAAAAB1E/i1Hvii_mWXw/s1600/Lift%2B%26%2BChi-squared%2BTable%2B(2).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-poGkPbj6qlg/VO_c4lSbd-I/AAAAAAAAB1E/i1Hvii_mWXw/s1600/Lift%2B%26%2BChi-squared%2BTable%2B(2).jpg" height="78" width="320" /></a></div><br />If you do the same calculation we did above on this table, you get a lift of 357,143.32. This huge swing in the lift value is the greatest limitation of lift in data mining. The only thing I changed in the data I was analyzing was the number of transactions that didn't have apples or oranges. Intuitively this shouldn't make any difference whether the correlation is interesting or not. That is why data mining has developed other measures of interestingness that are called null-invariant. Null-invariant measures aren't sensitive to that lower right value in the table. Eventually I'll write a blog post about those and add a link here, but that won't be tonight.<br /><br /><br /><br />Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-78042173468336737942015-02-24T21:16:00.003-05:002015-03-02T21:19:52.165-05:00Core Patterns of Colossal Patterns<div class="separator" style="clear: both; text-align: center;"><br /></div>Professor Jiawei Han at the University of Illinois and some of his colleagues created a method of data mining to find what are called colossal patterns. "Colossal" in this context basically means really big patterns with lots of different items included. As an example of this, think about people working in biostatistics looking for large sequences of DNA that are interesting. 
The "Pattern-Fusion" method that they created has 2 parts to the algorithm.<br /><ol><li>Use a regular pattern generation method (e.g. Apriori, FP Growth, etc.) to create the first couple levels of patterns that are supported. A common level of complexity for this step to stop at is when the algorithm has 3-item patterns (e.g. ABC), but this can be selected by the user when the algorithm is run.</li><li>Take these large-ish patterns and combine them together to make bigger patterns faster than incrementally looking for patterns that only have 1 more item.</li></ol>The reason why this method is so valuable can be seen if you look at how many combinations (patterns) are possible as the number of items increases. The table below shows the number of <a href="http://en.wikipedia.org/wiki/Combination" target="_blank">combinations</a> possible if you have 5, 10, or 15 items in each column, and pick 1-15 items in the rows.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-GsqbaaBlhxo/VOvd-6VAlHI/AAAAAAAABzc/hucj3x2GO24/s1600/Colossal%2BPattern%2BExplosion.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-GsqbaaBlhxo/VOvd-6VAlHI/AAAAAAAABzc/hucj3x2GO24/s1600/Colossal%2BPattern%2BExplosion.jpg" height="320" width="254" /></a></div><span id="goog_1044054204"><br /></span>If you're really only interested in finding the BIG patterns that the data supports, you don't really want to get bogged down in looking at 6,435 patterns that have 7 items in them (see the last column above). Plus, this tabular example above is VERY simplified; how many sequences are in a strand of DNA? Oh yeah! Old algorithms might get bogged down forever in that mess in the middle for that type of problem. 
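You can reproduce the 15-item column of that table with Python's math.comb ("n choose k"):

```python
from math import comb

# Number of distinct k-item patterns you can draw from n items.
n = 15
counts = [comb(n, k) for k in range(1, n + 1)]

# The "mess in the middle": C(15, 7) = C(15, 8) = 6,435 patterns,
# versus a single 15-item pattern at the bottom of the column.
biggest = max(counts)
```

Summing the whole column gives 2^15 - 1 = 32,767 candidate patterns from just 15 items, which is why skipping the middle levels matters.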
<br /><br /><br /><strong><u>Core Pattern and Pattern Robustness</u></strong><br />Now that I've summarized the conceptual process, I'm going to explain core patterns and pattern robustness. Core patterns are the ones that the algorithm "likes" to use in order to create the BIG patterns it's really looking for. I got my butt kicked by these 2 concepts during my quiz for professor Han's Coursera course, so I figured I should probably take the time to really understand them and explain them simply for everybody else.<br /><br /><br />In Han's textbook, <u>Data Mining: Concepts and Techniques</u>, he says that... <br />"for a pattern α, an itemset β ⊆ α is said to be a τ-<strong>core pattern</strong> of α if |D<sub>α</sub>|/|D<sub>β</sub>| ≥ τ, where |D<sub>α</sub>| is the number of patterns containing α in database D." <em>emphasis added</em>.<br /><br />Say what?!? For those of you out there who are not fluent in mathspeak, I will translate. For a pattern named Beta to be a core pattern of pattern Alpha it needs to meet a couple of requirements.<br /><br /><u>Criteria 1</u><br />Beta needs to be a subset of Alpha (β ⊆ α). In Venn diagram world this looks like the diagrams below. The line under that 'c'-looking symbol means that β can actually be as big as α (kind of like a less-than-or-equal-to symbol), and therefore they can be the same set of items and still satisfy this criterion.<br /><a href="http://4.bp.blogspot.com/-JfGFhEKyMSk/VOvbgzD6XPI/AAAAAAAABzI/nG9-uRDyo2g/s1600/Alpha%2Bsuperset%2Bof%2BBeta.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" src="http://4.bp.blogspot.com/-JfGFhEKyMSk/VOvbgzD6XPI/AAAAAAAABzI/nG9-uRDyo2g/s1600/Alpha%2Bsuperset%2Bof%2BBeta.jpg" height="128" width="200" /></a><a href="http://4.bp.blogspot.com/-xPiGmjvHZMY/VOvdP0JD2lI/AAAAAAAABzU/cB20dPMXGkk/s1600/Alpha%2Band%2BBeta%2Bsame%2Bset.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" src="http://4.bp.blogspot.com/-xPiGmjvHZMY/VOvdP0JD2lI/AAAAAAAABzU/cB20dPMXGkk/s1600/Alpha%2Band%2BBeta%2Bsame%2Bset.jpg" height="128" width="200" /></a><br /><div><br /></div><div><u>Criteria 2</u><br /><div class="separator" style="clear: both; text-align: center;"><br /></div>That complicated equation <a href="http://3.bp.blogspot.com/-i63pCTpjyTI/VO0rbmS_bMI/AAAAAAAAB0Q/tEG4f4YPZSk/s1600/Core%2BPattern%2BEquation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" src="http://3.bp.blogspot.com/-i63pCTpjyTI/VO0rbmS_bMI/AAAAAAAAB0Q/tEG4f4YPZSk/s1600/Core%2BPattern%2BEquation.jpg" /></a> needs to have a value of at least τ. What is tau, other than a Greek letter? It's just a value between 0 and 1 that the user of the algorithm picks so that there is a criterion to rate patterns against. If the calculated value in the equation is really high, we're more certain the pattern being considered is a core pattern; if it's really low...not so much. That explains tau, but now we need to understand the equation.<br /><br />|D<sub>α</sub>| is defined as "the number of patterns containing α in database D". Although the symbols might be confusing, this is actually relatively easy to calculate. Just count up all of the records, transactions, etc. in your database that contain the pattern α. You need to make sure that you're looking at sub-patterns too. Say one transaction has ABDFE, and you're looking for ABF (this is α in this example); this transaction counts for this calculation. 
The same process is followed for <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">β</span> but based on the first criteria, <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">β</span> is going to be a smaller subset pattern of <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">α </span>or the same pattern as <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">α</span>. So in the quick example above, <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">β</span> might be A, AB, AF, BF, or even ABF. The whole point of this ratio is to give a measure for how important <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">β</span> is in the make-up of the bigger pattern <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">α</span>. If there are 100 records that contain <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">α</span>, and 250 records that contain <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">β</span>, then this core pattern "strength", as I'll call it, would be 0.4 (P.S. don't get confused by the Venn diagrams above here. Those diagrams are for the items in the patterns themselves, NOT for the frequency of the patterns in the database you're mining). 
If tau were set by the user as 0.25, then <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">β</span> would be considered a .25-core pattern of <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">α</span>.<br /><br /><u>Example</u><br />Now let's use an example to clarify this and show how to calculate core pattern robustness. Suppose we have a database with some patterns that have the counts as shown in the table below<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-7YUZIX7WZGI/VO0hjulDADI/AAAAAAAABzs/GDZZ-qjrTSY/s1600/Core%2BPattern%2BExample.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-7YUZIX7WZGI/VO0hjulDADI/AAAAAAAABzs/GDZZ-qjrTSY/s1600/Core%2BPattern%2BExample.jpg" height="172" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">Let's assume that we set tau to be equal to 0.5. So we'll need to do the core pattern calculation explained above for all of the subsets of each item in this list to see if they are a core pattern. For the first pattern, ACE, there are 10 instances in the database (this one is simple, but others are a little tricky). Now we figure out the counts for A, C, E, AC, AE, and CE in the database. A=20, C=40, E=10, AC=20, AE=10, CE=10, and we already know ACE=10. So, for each of these patterns we can calculate the ratio ACE/A=0.5; ACE/C=0.25; ACE/E=1; ACE/AC=0.5; ACE/AE=1; ACE/CE=1; and ACE/ACE=1. 
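The ratio calculations for ACE can be checked mechanically. This is just a sketch that hard-codes the support counts worked out above:

```python
# Support counts from the example table: ACE itself appears 10 times.
counts = {"A": 20, "C": 40, "E": 10, "AC": 20, "AE": 10, "CE": 10, "ACE": 10}
tau = 0.5

# beta is a tau-core pattern of ACE when |D_ACE| / |D_beta| >= tau.
core = [beta for beta, n in counts.items() if counts["ACE"] / n >= tau]
print(core)  # ['A', 'E', 'AC', 'AE', 'CE', 'ACE'] -- C is cut at 0.25
```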
So based on our criteria, tau = 0.5, we have core patterns of A, E, AC, AE, CE, and ACE; C got cut out because it only had 0.25.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">BCD on the next line is similar, but for BCD the count is 20 because BCD has 10 transactions in the 2nd row and 10 more in the ABCDF transactions in the 4th row; told you there were trickier ones. Follow the same process as the above paragraph for the rest of the calculations. If you do this, you should come up with a table that looks something like this. </div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ThChVDa1-2k/VO_hXUz7IhI/AAAAAAAAB1Q/N8im0X5-3rA/s1600/Core%2BPatterns%2BFound.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ThChVDa1-2k/VO_hXUz7IhI/AAAAAAAAB1Q/N8im0X5-3rA/s1600/Core%2BPatterns%2BFound.jpg" height="225" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">As an exercise, try to figure out which patterns didn't make the cut. There are three of them, and I already showed you one of them. Obviously, if the value of tau were set higher, a stricter threshold would have cut out a lot more patterns and the table would have a lot fewer entries in the last column.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><u>Core Pattern Robustness</u></div><div class="separator" style="clear: both; text-align: left;">Now we need to explain pattern robustness. If you have the information above already, pattern robustness is pretty easy to figure out. 
In mathspeak, an example of how robustness might be stated goes something like this: "A pattern <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">α </span>is (d, <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">τ</span>) robust if d is the maximum number of items that can be removed from <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">α</span> for the resulting pattern to remain a <span style="font-family: Calibri, sans-serif; font-size: 14.6666669845581px; line-height: 15.6933336257935px;">τ</span>-core pattern." (Han's textbook again). All that really means is that you're trying to answer the question, "how many items can I remove from the pattern I'm looking at and still have what's leftover qualify as a core pattern?". For the first pattern in the table above, ACE, we have 3 items A, C, and E. If we take away 1 item, say A, we can see that CE is still a core pattern in the table. If we take away a 2nd item, C, then E is still a core pattern. If we took away E, we'd have nothing left, so that's no good. By definition the null set, or nothing, can't be a core pattern. So for ACE we were able to take away 2 items and still have a core pattern. So, if we were to translate that example to mathspeak we would be able to say that pattern ACE is (2, 0.5)-robust. Using similar logic we could add another column to our table stating the robustness of each of the rows. Please double check me...at the time of this writing it is getting late and my mental powers are waning. 
:)</div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ST1F6Rje7Z0/VO_heRge3vI/AAAAAAAAB1Y/eXCwqdQCobI/s1600/Core%2BPattern%2BRobustness.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ST1F6Rje7Z0/VO_heRge3vI/AAAAAAAAB1Y/eXCwqdQCobI/s1600/Core%2BPattern%2BRobustness.jpg" height="248" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">I hope this helps people out there understand something that took me quite a while to understand from just a couple of sentences in a textbook.</div></div><div><br /><br /><span id="goog_1044054205"><br /><br /></span><br /><ol></ol><div></div></div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-7577443429795096542015-02-17T21:20:00.004-05:002015-02-26T21:12:04.658-05:00Closed ItemsetsWhen we are looking for patterns in data sets, often the number of patterns that can be found can be huge. To demonstrate this I'll use a simple set of examples. Let's say that you have one transaction with items A and B (I know this wouldn't give you much information, but it proves my point). In this transaction there are <u>3</u> patterns/combinations: A, B, and AB. Now let's say that the transaction has A, B, and C in it. This transaction allows for <u>7</u> patterns: A, B, C, AB, AC, BC, and ABC. If it had A, B, C, D, then there would be <u>15</u> patterns: A, B, C, D, AB, AC, AD, BC, BD, CD, ABC, ABD, ACD, BCD, and ABCD. You can see that the number of patterns that can be created grows MUCH faster than the number of items in the transaction. 
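This growth is 2^n − 1: every non-empty subset of an n-item transaction is a pattern. A quick sketch to confirm the counts above:

```python
from itertools import combinations

# Number of non-empty patterns in a transaction with n distinct items.
def pattern_count(items):
    return sum(1 for k in range(1, len(items) + 1)
                 for _ in combinations(items, k))

for t in ["AB", "ABC", "ABCD"]:
    print(t, pattern_count(t), 2 ** len(t) - 1)  # 3, 7, and 15 -- they agree
```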
This growth pattern is based on the mathematical concept of <a href="http://en.wikipedia.org/wiki/Combination" target="_blank">combinations</a> (wikipedia).<br /><div><br /></div><div>Also, any data set that anybody would actually care about would have more than one transaction, probably thousands, if not millions or billions. This explosion of complexity that comes from all of the possible patterns that <i>can</i> be found in data is the motivation for ways of compressing the possible patterns/combinations. That is what closed itemsets are all about.<br /><div><br /></div><div><h2><b>CLOSED ITEMSETS</b></h2>Now I'm going to have to get a little technical here, but I'm going to translate for you too so don't worry. The definition of a closed pattern goes like this, "A pattern (itemset) X is closed if X is frequent, and there exists no super-pattern Y <span style="background: white; font-family: 'Cambria Math', serif; font-size: 13.5pt; line-height: 19.2600002288818px;">⊃</span><span style="font-family: 'Times New Roman', serif;"> </span>X, with the same support as X" (J. Han, <a href="https://www.coursera.org/course/patterndiscovery" target="_blank">Pattern Discovery in Data Mining</a>, Coursera Lecture Material) </div><div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">Let's break this definition down. "itemset" is just a group of items. It could be one item (A), 2 items (AB), 3-items (ABC)... k-items. People sometimes refer to itemset size by telling you what k equals. k is just a variable that represents the size/complexity of the pattern you're looking at. By, "frequent", the definition is saying that this itemset, X, you're looking at meets the minimum <a href="http://dataminingdiscovery.blogspot.ca/2015/02/support-and-confidence-in-pattern.html" target="_blank">support</a> requirements, so it might be interesting. 
The whole bit in the definition about "super-pattern Y <span style="background-color: white; font-family: 'Cambria Math', serif; font-size: 18px; line-height: 19.2600002288818px;">⊃</span><span style="font-family: 'Times New Roman', serif;"> </span>X" basically means that all of X is contained in Y. In Venn diagram world, if Y is a super-pattern of X then it looks like this</div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-eGvNapDKcdk/VOPtnk1eR_I/AAAAAAAAByI/rRaevQdxvCg/s1600/Y%2Bis%2Bsuperset%2Bof%2BX.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-eGvNapDKcdk/VOPtnk1eR_I/AAAAAAAAByI/rRaevQdxvCg/s1600/Y%2Bis%2Bsuperset%2Bof%2BX.jpg" height="128" width="200" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: xx-small;">super-pattern Venn diagram</span></div><div class="separator" style="clear: both; text-align: center;"><br /></div>In our example above, this "super-pattern" notion amounts to saying that ABC is a super-pattern of AB. The notation Y <span style="background: white; font-family: 'Cambria Math', serif; font-size: 13.5pt; line-height: 19.2600002288818px;">⊃</span><span style="font-family: 'Times New Roman', serif;"> </span>X is a fancy mathematical way of saying the same thing. The way I think about a closed pattern is that it's the biggest, most complex pattern I can find at various levels of support. 
It isn't exactly like that, but it will become clearer with an example.<br /><br />Let's suppose we have a simple set of transactions like this</div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-EhshuArdPNI/VOIfbyc_LOI/AAAAAAAABtc/mtGiHBLD6ws/s1600/Closed%2BTable%2BExample%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-EhshuArdPNI/VOIfbyc_LOI/AAAAAAAABtc/mtGiHBLD6ws/s1600/Closed%2BTable%2BExample%2B1.jpg" height="88" width="200" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: x-small;">Simple transaction table</span></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">If you are looking for patterns that have <a href="http://dataminingdiscovery.blogspot.ca/2015/02/support-and-confidence-in-pattern.html" target="_blank">support</a> greater than or equal to 1, there would be a lot of combinations possible for just 2 transactions. Based on the work we already did above, there are 15 combinations for the first transaction. Transaction 20 also has 15 combinations, but there is some overlap in possible combinations between transactions 10 and 20. 
To visualize this, let's list them out in a table.</div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-SrhbMNYq6KY/VOInUuFlMGI/AAAAAAAABuU/xsINgZd6aQ0/s1600/Closed%2Bcombinations%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-SrhbMNYq6KY/VOInUuFlMGI/AAAAAAAABuU/xsINgZd6aQ0/s1600/Closed%2Bcombinations%2B1.jpg" height="71" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: xx-small;"><br /></span></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: x-small;"><span id="goog_1462603380"></span><span id="goog_1462603381"></span>Combination Table</span></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"> If you look closely at the combination table above, you can find some repeats if you compare transactions 10 and 20. 
To make this easier to see, I've rearranged the entries a little bit here.</div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/--F5jp47oc6w/VOIpMEWC0eI/AAAAAAAABug/O7ELfHZPX_4/s1600/Closed%2Bcombinations%2B2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/--F5jp47oc6w/VOIpMEWC0eI/AAAAAAAABug/O7ELfHZPX_4/s1600/Closed%2Bcombinations%2B2.jpg" height="67" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: xx-small;"><br /></span></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: x-small;">Expanded Combination Table</span></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The green highlighted combinations are found in both transactions. Based on this, there is support of 1 for every combination in the expanded combination table, except for C, D, and CD which have support of 2. This is where closed itemsets come in handy. If you remember the <span style="background-color: white;"><a href="https://www.blogger.com/blogger.g?blogID=1030543525516570581#editor/target=post;postID=8190271058154186024;onPublishedMenu=allposts;onClosedMenu=allposts;postNum=1;src=postname" target="_blank">Apriori Principle</a></span>, if a more complex combination meets the minimum support requirements, then the subset, or simpler patterns, will also meet those requirements. In this example, if I make the statement that I know that CD has support of 2, then I automatically know that C, and D also have support of at least 2. In this case C and D have a support of exactly 2 so CD is one of my <u>closed patterns</u> in this data. Let's revisit the definition of a closed pattern to make sure that's true. 
There are 2 requirements for a pattern to be a closed pattern</div><div class="separator" style="clear: both; text-align: left;"></div><ol><li>It's frequent; yep, CD meets that criterion</li><li>No super-pattern with the same support as CD; there are bigger patterns in the data, but none of them have support of 2 so we're good here too</li></ol><div>What's so awesome about this is that I can say I have closed pattern CD with support of 2, and I automatically know that C and D also have support of 2 because of the Apriori Principle. We can use this one closed pattern to compress three patterns without losing any information!</div><div><br /></div><div>Now let's look for some other closed patterns in the data, so you can get the hang of it. The way I explained closed patterns before was that they are normally the most complicated patterns that are frequent. In our table above we've got ABCD and CDEF to work with that seem to fit that description. So which one is a closed pattern? Both actually. Let's look at the 2 rules for ABCD</div><br /><div><ol><li>It's frequent; in our example we only need frequency of 1, so we're good here</li><li>No super-pattern with the same support as ABCD; CDEF has the same support as ABCD, but CDEF is not a super-pattern of ABCD. </li></ol>The same logic is applied to CDEF. 
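Both rules can be checked mechanically against the two transactions from the table (ABCD and CDEF), using the minimum support of 1 from this example. This is a sketch, and `support` and `patterns` are my own helper names:

```python
from itertools import combinations

transactions = ["ABCD", "CDEF"]  # transactions 10 and 20

def support(pattern):
    return sum(1 for t in transactions if set(pattern) <= set(t))

# Every pattern that appears in at least one transaction (support >= 1).
patterns = {frozenset(c)
            for t in transactions
            for k in range(1, len(t) + 1)
            for c in combinations(t, k)}

# X is closed if no proper super-pattern Y has the same support as X.
closed = [p for p in patterns
          if not any(p < q and support(q) == support(p) for q in patterns)]

print(sorted("".join(sorted(p)) for p in closed))  # ['ABCD', 'CD', 'CDEF']
```

The brute-force double loop is fine for two transactions; real miners avoid enumerating every pattern, but the closedness test itself is exactly this superset-with-equal-support check.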
Going back to Venn diagram world, the whole set of closed patterns in this data looks like this, where S is the support for each closed pattern.</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-lmvXy6P22LY/VOP12JCzexI/AAAAAAAAByY/mhN3NJw6wfc/s1600/Closed%2Bpattern%2Bvenn.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-lmvXy6P22LY/VOP12JCzexI/AAAAAAAAByY/mhN3NJw6wfc/s1600/Closed%2Bpattern%2Bvenn.jpg" height="156" width="200" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: xx-small;">Closed pattern Venn diagram</span></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">If you use the Apriori Principle, and these 3 closed patterns, you can reconstruct the entire data table we started with without losing any information. Pretty awesome, right?</div><div class="separator" style="clear: both; text-align: left;"><br /></div><h2 style="clear: both; text-align: left;">MAX-PATTERN ITEMSETS</h2><div class="separator" style="clear: both; text-align: left;">There is another way to compress pattern data called max-patterns. They are almost exactly the same as closed patterns, except that we don't care about whether other patterns have the same support. A max-pattern still has to satisfy the minimum support threshold, though. The easiest way to explain this is by using the example above for closed patterns. In that example we have closed patterns ABCD, CD, and CDEF. CD is a distinct closed pattern because it has a support of 2 and the others only have support of 1. This is despite the fact that it is actually a subset, or part of, ABCD and CDEF. 
If you're looking for max-patterns you don't care about this distinction so you only get ABCD and CDEF as max-patterns; that's because CD is part of the larger patterns ABCD and/or CDEF. A max-pattern will give you more pattern compression, but you can see that even in this very simple example, you're losing some of the information that was originally contained in the raw data because now we don't know how much support CD, C, or D had.</div><br /><div><br /></div></div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com2tag:blogger.com,1999:blog-1030543525516570581.post-81902710581541860242015-02-17T19:59:00.001-05:002018-05-18T14:58:51.622-04:00Apriori Principle<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script><script> (adsbygoogle = window.adsbygoogle || []).push({ google_ad_client: "ca-pub-2829240478141584", enable_page_level_ads: true }); </script><div dir="ltr" style="text-align: left;" trbidi="on"><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div>When making sure that all of the patterns in a set of data meet the minimum <a href="http://dataminingdiscovery.blogspot.ca/2015/02/support-and-confidence-in-pattern.html" target="_blank">support</a> requirements, we want to find all of the patterns that are supported, and not waste time looking at patterns that aren't. This seems simple, but in much larger data sets, it can become difficult to keep track of which patterns are supported and which aren't. 
The Apriori Principle helps with this.<br /><br />To explain, let's use the data in this table and assume that the minimum support is 2.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-HkdpFOMSM0M/VOJO3BIGwII/AAAAAAAABvc/dMHweJu_Hko/s1600/Apriori%2BTable%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="137" src="https://4.bp.blogspot.com/-HkdpFOMSM0M/VOJO3BIGwII/AAAAAAAABvc/dMHweJu_Hko/s1600/Apriori%2BTable%2B1.jpg" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div>We start by looking for single items that meet the support threshold. In this case, it's simply A, B, C, D, and E, because there are at least 2 of each of these in the table. This is summarized in the single item support table below.<br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ZvC1HoxV60g/VOJO-DaIlHI/AAAAAAAABvk/3G6mcw0ubHE/s1600/Apriori%2B1%2BItem%2BSupport.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://1.bp.blogspot.com/-ZvC1HoxV60g/VOJO-DaIlHI/AAAAAAAABvk/3G6mcw0ubHE/s1600/Apriori%2B1%2BItem%2BSupport.jpg" width="171" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: xx-small;">single item support table</span></div><br />Next, we take all of the items that meet the support requirements, everything so far in this example, and make all of the patterns/combinations we can out of them; AB, AC, AD, AE, BC, BD, BE, CD, CE, DE. 
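This generate-from-survivors step (and the pruning applied at each following level) can be sketched with two small helpers. The function names here are mine, not a standard API, and this is only an illustrative sketch:

```python
from itertools import combinations

def generate_candidates(frequent, k):
    """Propose k-item candidates by unioning pairs of frequent (k-1)-itemsets."""
    return {a | b for a in frequent for b in frequent if len(a | b) == k}

def prune(candidates, frequent):
    """Apriori pruning: drop any candidate with an infrequent (k-1)-subset."""
    return {c for c in candidates
            if all(frozenset(s) in frequent for s in combinations(c, len(c) - 1))}

# All five single items met the support threshold, so every pair is generated
# and nothing can be pruned yet.
singles = {frozenset(i) for i in "ABCDE"}
pairs = prune(generate_candidates(singles, 2), singles)
print(len(pairs))  # 10: AB, AC, AD, AE, BC, BD, BE, CD, CE, DE
```

The same two calls repeat at each level: feed in whichever itemsets survive the support check, generate the next size up, then prune away every candidate that contains a subset that was already cut.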
When we list all of these combinations in a table, and determine the support for each, we get a table that looks like this.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/--ZCU2TrpO4w/VOJO-PfNuUI/AAAAAAAABv0/GBhVl83leSc/s1600/Apriori%2B2%2BItem%2BSupport.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://3.bp.blogspot.com/--ZCU2TrpO4w/VOJO-PfNuUI/AAAAAAAABv0/GBhVl83leSc/s1600/Apriori%2B2%2BItem%2BSupport.jpg" width="153" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: xx-small;">2 items support before filtering</span></div><br />Several of these patterns don't meet the support threshold of 2, so we remove them from the list of options.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-Dbo1KyzFkKc/VOJO-IUaA_I/AAAAAAAABv4/jhMNCe14VUg/s1600/Apriori%2B2%2BItem%2BSupport%2B(2).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://4.bp.blogspot.com/-Dbo1KyzFkKc/VOJO-IUaA_I/AAAAAAAABv4/jhMNCe14VUg/s1600/Apriori%2B2%2BItem%2BSupport%2B(2).jpg" width="171" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: xx-small;">2 item support table</span></div><div class="separator" style="clear: both; text-align: center;"><br /></div><a href="https://www.blogger.com/blogger.g?blogID=1030543525516570581" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"></a>At this point, we use the surviving items to make other patterns that contain 3 items. If you logically work through all of the options you'll get a list like this: ABC, ABD, ABE, BCD, BCE, BDE (Notice that I didn't list ABCD, or BCDE here because they are 4 items long). <br /><br /><br />Before I create the support table for these let's look at these patterns. 
The first one, ABC, was created by combining AB and BC. If you look in the 2 item support table (before or after filtering), you'll find that AC doesn't have the minimum support required. If AC isn't supported, a more complicated pattern that includes AC (ABC) can't be supported either. <strong>This is a key point of the Apriori Principle</strong>. So, without having to go back to the original data, we can exclude some of the 3-item patterns. When we do this, we eliminate ABC (AC not supported), ABD (AD not supported), ABE (AE not supported), BCE (CE not supported) and BDE (DE not supported). This process of removing patterns that can't be supported because their subsets (or shorter combination) aren't supported is called pruning. This pruning process leaves only BCD with a support of 2.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-hmluntz9czI/VOPh5wFC9EI/AAAAAAAABxw/zfBWhxeLKd4/s1600/Apriori%2B3%2BItem%2BTable.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="65" src="https://4.bp.blogspot.com/-hmluntz9czI/VOPh5wFC9EI/AAAAAAAABxw/zfBWhxeLKd4/s1600/Apriori%2B3%2BItem%2BTable.jpg" width="200" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: xx-small;">3 item support table</span></div><br />The final list of all of the patterns that have support greater than or equal to 2 are summarized here.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-P6lO6Vfg5r8/VOPh_1abGlI/AAAAAAAABx4/kLkO1VxN2Vc/s1600/Apriori%2BItems%2Bsupported.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://4.bp.blogspot.com/-P6lO6Vfg5r8/VOPh_1abGlI/AAAAAAAABx4/kLkO1VxN2Vc/s1600/Apriori%2BItems%2Bsupported.jpg" width="181" /></a></div><strong><br /></strong><strong>Takeaways:</strong><br /><ol><strong></strong><li>The Apriori 
Principle can be used to simplify the pattern generation process when mining patterns in data sets</li><li> If a simple pattern is not supported, then a more complicated one with that simple pattern in it cannot be supported (e.g. if AC isn't supported, there is no way that ABC is supported)</li><li>You can also look at takeaway 2 in the opposite direction. If a complicated pattern meets the minimum support requirements, all of the simpler patterns that can be created from that complicated pattern must be supported. (e.g. if ABC is supported, then AB, BC, AC, A, B, and C all have to be supported)</li><strong></strong></ol><strong></strong><span id="goog_1661146082"></span><span id="goog_1661146083"><br /></span></div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-66608761228426810932015-02-12T21:15:00.001-05:002015-05-09T09:46:54.865-04:00Support and Confidence in Pattern Discovery<div style="border-image: none;">When analyzing patterns in data, what we are really looking for are patterns that are interesting. There are subjective ways to determine if data is interesting, but data analysis can be sped up significantly by creating objective measures for "interesting". When looking for patterns, associations and correlations in data, many algorithms will use the objective measures of <strong>support</strong> and <strong>confidence</strong>. These concepts are easier to understand looking at an example.</div><br /><br /><div style="border-image: none;"><em><br /></em></div><div style="border-image: none;"><em>Example</em>: Suppose we happen to have a small sample of the transaction records from a grocery store. 
The transactions might look something like this:</div><div style="border-image: none;"><a href="https://www.blogger.com/" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"></a><a href="https://www.blogger.com/" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"></a><a href="https://www.blogger.com/" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"></a><a href="https://www.blogger.com/" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"></a><a href="https://www.blogger.com/" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://www.blogger.com/blogger.g?blogID=1030543525516570581" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="174" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAcAAAAD1CAYAAADH5SXZAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAADcaSURBVHhe7Z0LmB1FmfcrGpjcE0K4BAgQLppwi2wECUQj4ZOLgGAEPr4VEdbVNbIrIiygKDdFhUdAfDbC4qqAuPIQjMhFbgsKBgLBGO5EIEaWSy4ESDKBJANhvvnV9Dup6fQ50+ec7nPOTP9/89RT53T36a6uqq5/1ftW9fRr78AJIYQQBeN9USyEEEIUCgmgEEKIQtLv3sdebb/v8SXRVyF6D2dM29397O7n3Zur26ItQgjRM/+w80g37YAdXL9f//Fv7TfcvyjaLETv4eqv7u++dd1f3LIVa6MtQgjRM1MnjHZfPWq8TKBCCCGKiQRQCCFEIZEACiGEKCQSQCGEEIVEAiiEEKKQSACFEEIUEgmgEEKIQiIBFEIIUUgkgEKIXsHJn9jFByGyIpM3wRw6cVs3euTA6Ft5Fr+xxt0575Xom0ji9Gm7u/3GbeHeefc9N/uZZe4nty2I9jQWa3ziZXjsR3d0Qwb0j751snrtu27mn/4efcuHSt8EUyr9fZmksumN93/h5/Z2e43dzH9+YtGb7txfzvefhagGexNMJgL401P3d1sMHxB9K89rK9e6L17xUPRNJBHm5yuvv+1OmfGw/9xIEI+jJm3vP4dlGG5PYsVbbe75V1a5i254ItqSHZUIYKn092V6Khvq1h+fWJJ7RyULJIAiS/QqtAYz85sfdzefO9XHcd5oXRd9cu7VjkaqNzNi8KZunw+Mcv995sf8aKRZKVcefZVtNx/kjusoEyw4fZEilqmojEwE8O6/vOp+N+d/u8KiJa3RHuc/h/t+M/vFaE+x2aR/Z9ZbHHLWz+e5X/3hb+6q2/+ay8gpLxjtWTlTJ8J6MGhAf/fpMqORRlOuPPoC8bLBvA7c74ETtvaf+xp9vUxF7eTy3y
B6MleYz9D8RHzff7ct3bp31ndr8BkxfGDbYa5lk/f77ys7HuJLZz3tPxvxcwE+tOEdIw/ON+/510v6O75yxDi39WadvkvO/fSLKxKPTZOOOOG5l7y5psuPZ+kNTVM0ShD6Zsxf9Yt7XvBxHNK0546deQzhNUJqyZ+QNCbQJNMi6WSUYY1QluarLEygacvDCMu1VP5VmufnHL+Xr1vl6lXauhpSrmy4JiNzKGUSrrSOQby+ltsH4bPFfYXWjzD/ktqUUnlaaZmK4pGpDzBOTwIY7qdifnKf7boaSL5T8RktMGqIQ8/14QWvdTUU4bkYNSX97k9PLe3WsPDQlTo/57CHrtxx8XQYpX7zdsc9XXfvQveZyTuU9Jeav69c/nH+w/fdzpsW43CN33bkn6UfqsmfJKoVQAjTQBr/8ZIH/OdayUIAy/mvQ/9rqXKFeBn1lOfUnRs7yojrHrjX1t1GKOz72V3PdzXS5a4b1tUkypUN4vHRPbbynxmpn3b1o/4zcM1q6xjPbyh05ery5V/ax43demj0bWPIi2O/90f/OTwP5bJZR9pK1eO0ZSqKS9P4AEPxM+h1WuXGdMPDy4MHHMsMySQYaSQ1FPHjD/6HbbqO44HgweQ6EM6YqzQd9DxLpYFt9ErbOh7qUrR19GR7gvNbw0QDQZoMrlHOp5M2f7LmoWeWRZ8601gqfY0gTXnEyxXB+OvLK33+Aw0zgpJEUp5Td9hOPYzXfb4fud+Y6Fv6uloJ3M+ekZgA5wwhbdXWsbQgaCZ+nJ802LNlvLWu+3cD32U8T8Hqca3PmCgODRdAHngebHqOjz633PceMbXQwOADO+nS2b7nyqjBHkR+Q+82Dtv53dEX3ufPZw0U2zEhGdY75Hz0BumVch2uF/ZeK03HIRM3NGjcE+kg8Ht+w8iW65G2EL4Twl54EvSYw/PTOyZNXMMaRfaTjiTYlyZ/siZucjKTWDOQpjzCcqWOsh0/LSM1Y+Ium0efusPvaNzJc3xvRng+9oUiRANvpK2rPcF5mBBC+PLhH+wSONIUnqfWOpaWXUZ3ih/n5PzcV2gZID+4z1KUq8e1PGOiWHTW9AbCA0CF5SE0/x9+BhqYeMOJ/d5IMp3wwPI74Hx/CyZhmP8khEYgFLD49SpNx3ajBkefnLv14ZeiT53n5SE3s1G84eJ7fFsSpc4P9z+xJPrk3FYjkgWm0vwpCj2VR5jvoY+acrXOUNKIBBgtmtmP+mTHA+Jn5+MYa8iT6KmuVgsm2LDzU2sdS4vl19KOTmYS5TpJaepxT2UqBDRcAP932VvRp43BrISpBJs+wXqNpXh91QYHOqx7J7lBsdmJ9BjxkTBFn+uUIm06rOcMWTVQIeXOHz7cpRrjtPkjuhPmu9UBCzaSgiSrROuaZDMexJe42AgrpNK6WgrObaMghDe0YmBmNZNmrXUsLWbuHBMIbijE4WSYOKrHIisaLoBJHPvRHf3aHZz0+Fcw3xBqfegMzCDWsADn5To0LqFvo5J0JDV+WZL3+fOEfAzpzT1xqwMWQsHI477S1tWeYHRJ+giMOrFIhILLLOx61rF5L7zuY+7HTLMIMZDWPzy+YbQpRF40pQAy680aFnwB1nMNG4JaoWHBj4IvwUxPPIyhb6OSdIRm0TzozaKx//gNk2ySRjm9CasD8YA/Ki/S1NVqCK0vmw9ryaSOjRzaEn0qz06RFSU+8YWRKTNk87CgCBGn6QSQ0YKNsHgY8I1Yz7WcOakaeMjwJfDAGUMGbuLjStPBuUIfTrw3XeusOSh3/nAWYuhnajSkK/STPhlM9ugthPlOR8fqQRjyplxdrZbtt9xgfjSzYq11bMsR3ZcfhNcweBZsog8jQZvMQhz6yoXIm6YTwNa334k+OTe4ZYOpEUEaP2Z49K02ZpyyX7eHO2k6eTXpeHn5hh71J/bexvs0uA4+my8csmu3BiSERckcF29w4oTnnxJMXiAOlzIseGll9Km+kE/cq4VrTp/ctdYMaD
R7Wm/YDMTLI8z34z8+diMzOccT8iBNXU0Dlgy7J9Ia91/aUpVq6hgL2I2dOjo71HMC5R9ewxg6aINwM3v24n+a6DtJVm/smlmS9hkTxaLpBJCerpnJGIHhg+NB+uyBO3Xzt1QLDRa9TyYUmO/BFgvD/IVv+LiadNw1b8MrpvgNPg3Ojc8m/htmshm8kYPj8MOUIzw/DQvnN9+JnR/zbKNEhnvmXi2EjR/3S+++WSlXHvF8ZxmB1R3qA8dvEyxdyIq0dTUNpJvfEkgv/kuDBeTUd6imjvHZfsMxdHoI/N62hzDCMzcCdeaD2w3vVm+4Fs9arVTzjIliUbui9ACvKYoTbkvaf/vcl7tMLDxQPEj4CnjzhWG/K3eupH3hwxfC+WkIwlc9VZIOoBHBRJVkHoo3Gkwxj/vDzI9Y6p44P2vPks5PQ4OfMr7OqdS5oNy+coQLjVnbWAruj/vGvMVSl2YhaaF0ufKwfA8b1BB+9+fnlkffyudr2ryDSupqEmnKBt9iWC+rqWNAvY+LHT7LcK1kmBebRb5COyeB480nyLPGmkSjXJ6W2leuTIWAXF6FlhWYmlgPxIOch1/AzCE9nb+adNhvoJyPiF4+Zi0eTOuFpyVt+vsqlf4/wDSkKY/QjFYP/x80qqwrva4dXy5fMHMy0gMW4odCzu9tlIsIZ2E1qOUZE32TXN8FKkQ9yEMARf7gfzQTbFwA8dVhrgRGqEmjTSFqRf8PUAjREP6+dHX0qfNdp/j7EEXWNpr4YRrFHylEnkgAhRB1hYX4+PzMZ4i/jxEhE2IAXyu+Q5krRd7IBCp6LTKB9n7MP2fUy6cqio1MoEKIhsPEGkTPghD1RAIohBCikEgAhRBCFBIJoBBCiELS78m/v9n+1N973wuKhTjyI2PcfY8vdm+tLf9GFSGECBm79RD3kQ9u4fq1dxBtE0IIIQqDTKBCCCEKiQRQCCFEIem3+u229o4QfRWi9zBqs0HuzZVr3Pr3ZMUXQqRnYEt/N2xIi+u3ZPnq9o4QbRai9zB+p1Fu4UtvurbYv8cRQohyjBw+0I3ZephMoEIIIYqJBFAIIUQhkQAKIYQoJBJAIYQQhUQCKIQQopBIAIUQQhQSCaAQQohCIgEUImMWv/qKe+P15dG3DZTaLrqjfBL1QgKYMzzMq1tbo2+ikVAO1ZQHvyEkMffh2e7FRQujb53HHjZ1orvtlpnRlk5Kbc8Cu68w9GayyKe+liciHySAOcEDd9H5Z/mHedZN10dbRSM558xTfHlM3mdXL1xpmHHFxf43hCcemxdt7YRzfOmkY9xRhx0QbWkMdl9h+NC4rX39i6e5KChPRBokgDlw/x/u9g/czBuujbaIZoBygSkHHux+de3V/nM5nlvwtPvplZf742Ht2jU+NnbZdZz7+lnnuSuuvC7a0hjsvq6+5qau8INLr/L178TjD08t9n2JeuTJdb+40otqKXNtT/tF45EAZsyp00/0gUbxd3c8GG0VzcRhR0zzDSQCV46H5zzgY45PYuTmo9yJJ0/vEshGs+9+k7vCoYcf7a674Xa//Z47b/VxEalHnqxbty76lExP+0XjkABmDI3l7Eef943ipi0Doq2imZiw9z4+vueu23ycBD6kyy6+wJ37nR96oSsFpu5KfYpxGCHkMUrY60MTfZxkiQh9Ylyb70lpiPvSeiI8Nk2+hMfXg57ypFSa4/v43rpqlf/82rIlG91DT/tD0uZxuK9cmYn0SAAzhl7mkKFDo2+iGRm9zbbu2OM/782bpRqQ2Q/c6+OJH57k4yRogDB11+LjvfP2m93UA/Zwv77+Z9GW7ImPUC3dmAEZBXN9vhOHzJp5vfeXss8CPrSkPOM+MPeFx/JbM0XG4Ryhf5VQ6tx5UCpPSpVluI984zv1BzCp2j1wnp72h6TN47RlJipDAigKyZFHH+fjUr6gO26b5UVyh7E7R1uyh4bs7NO/7L44/TR3yqlnRVuzA1GCaced4OM4jz7yoDvu6I
P8KNd8ZAb+qwu/fYbfd8d983zgGEZOF3zr69FRnZCH5Be/t2NvvPleLzK4A5JMzQg+AsH5cRVwPMTPnTU95Ukaxmw/1ucF9QO4b/Mz0rnqab9RSR4b5cpMVI4EUBQSM4XRcMehwWbk8olDj4y2ZA/XoCHLUvwQIgJpZ3SFuNJAlvJRIkDsn3bsCV0+MiBtZv5lH402gWNodDl/OJOS7RddMsP/3o79wLjdu+7rqSfn+9jg/FybCUScn04Gx59z/sU+zpJK8yQNlhdjdtjRf8ekzncC9LQfKs1jo1SZieqQAIrCQkNCQ0NjFDLzhs5ZnbvtPsHHWcO6wazFD1iSQWDUZeY3BB4BSIIRSlIDaoI16YADfRzC6AYemz/Xx0aS2X/osBE+Xr26u1/Nzr/fpI/5OCTrkXCleVIvqsljKFVmojokgKKwjN9tTx+Hk2HwtWCCQhzz8OW+9OLf3WWXXOA/Z93YP7ZgSbdgZkUEIGk0YSOUOOanenz+o10jKAtsKwWTORD3no41QTSBzJNK86ReVJvHpcpMVIcEUBQWTG/0qBkZ2KQDa3xspmjW0IBN3LdzYo35o/IiNEPeevONPk6DjZQWvvBXt+DZJ7uFZcsWe9PluPGdnQeDkTSTOXgpgI26MDc2G9XmSdZUk8cieySAotCYn4+eNyMYGm1EEX9MXtjaQa4VN79mzTbbbu/jpGn/pcA0C/iZSGtSCP1ZiB8mRvxZLAGy0RaTOpqRavIkayrNY5EPEkBRaMzPh19o3p/n+M82QzRPzv72933MxIw8p/63tnauRatk0oeJf9u6tT7uidn3d5oVDz70qFRm4yFDOo9pXbXCxyH1WAZRaZ7kkaZK81jkgwRQFBoabEYuNooBmyGaJzSANtsvvgaQUaGNSGuFdWZQ6m02Seyx594+vu2Wm3xcLaV8WXb++IsIEJpS0//zzhMTJGZmhtfgc09rNHsSsaT9WeWxqA0JYMbQoLG+h2AP2ry5c7q28RCL5sIaI2DyS73AxIX44g+yugLMEMWHZiOVtFgds2AzHzG3VTJzED8Zv+G3LMqmzjJpg0kj+C05r03igMlTDvLxtT//iZ8EQ+D6SUtMgPOb75URMOfnuUH8Si2DqEeeWNlfcel3u9LES7VL8aG99/Xx9dde7Y8nb8LRYrn9leaxyAcJYMZgDqIXSaByAw+SbcPJLZoLGiMzh+U1+aUU+IAQAxZEx/2BLS0t0afyWNqtjlngvhhlVjPblN/wPttlSxd74eGtI7zRBL/llKkHd/ORcn3Eg/rOJBgCs13P++5l0REbM/3f/t1P9OA3nJ8GnxEZ1y1nmswzTxBE0oRvMJ6mJLAUhMfHJ/30tL+SPBb50G/J8tXtHSH6KkTvYfxOo9zCl950be+sj7b0fuj10xDScDIRohmwkQjiU+69qMCxaY4zMDEyqivX2Nc7T9KkKaSn49PeI1SSd6J6Rg4f6MZsPUwCKHovfVEAMdNhMufNKnqnbCfKE5E1JoAygQrRJOAfwkz3hX85VQ19hPJE5IkEUIgmgjeV1GMWam9CeSLyQgIoRJOA76fULMiiojwReSIBFEIIUUgkgEIIIQqJBFAIIUQhkQAKIYQoJBJAIYQQhaTfGyvXtHeE6KsQvYcdRg93r7zW6t59971oixBC9MzQwZu6LUcOdv3eXf9e+/r1akBE72OT/u93765f79rbow1CCJGC972vn+v//ve5fqvfbmt/a01btFmI3sPmIwa5FavWuI4+XLRFCCF6ZkBLfzdscIt8gEIIIYqJBFAIIUQhkQAKIYQoJBJAIYQQhUQCKIQQopBIAIUQQhQSCaAQQohCIgEUQghRSCSAQjSYl19+yS1/7bXomyhHqbxSHopqkADmBA/kvD/PdQtfeN61rloVbRXNBGVkoVFlxLUnThjvZt7462hLeWb/6X5fp5oN8q+afLT8TyJ+r6XyqtI8FMKQAGYMD+3nPnucfyA/echUt/9H9na7jN3GXX/dNd
ERotGEZWSBMvrB9y4s2Rg3A6T7M0cf7utUs/GV6f/clY+kMw3kt+U/ncWQZr5X0XeQAGYIjScP7ejRo91vbr7dh6t+2il8p5/2r+7uu+7wn0XjoNGljICysXK69PL/cJdfeolvjJuVceN2c+df+D33y/+eGW1pHu6+8/c+PvjQT7r/vGqG/1yOp5960uc3x8OaNd3/I00z36voO0gAM2RAywB33/1z3CU/vMJN/ugUHz497RjfwMLn/vFYH4vGQAeERvfzJ3/BXf6jGb5srJxOOPEk98KiV70QNiujttjCTT/lq+7gQw6LtjQf0z5znBdDBK4cD9z/Bx9zfBK94V5F70cCmCE8tLvvsWf0bQM7jt0p+uTkqG8Q+KasA/LVr53hyyrO0GHDvBAmYT4uC2mo5jch1JV4feE85fxslVwzPDarernPvh/x8a23/NbHSXDd88/9pu9sbJFQDgbpKnevaUjKQyEMCWCdSWp4Rf4899wCH592+pluu+3G+M9p+e2sm7xvy/xVhK02H1LWpF3Nb0L4/e7jxrr/+umV0ZYNkz2u/2WyPxk/c/yaZ55xaqIAxI/lWlyzVshbP8LuGGmXEp7/+Z+7fTxp/wN8nERP95qGpDwUIkQCWAdWrljhYxoG0RiefeYZH2PurIQrZ/zYffmLJ3nhfOiR+W7e48+63991n/ddMaJMErRqfhOC+dB+f/Y3z422lodr4mdmVMX1CJjer/3Fz9xpXzslOqoTJphwLD5QO5b0ZcX/Pf6zPv5Tickws35zo38Wdt5l12hL9lSTh6J4SADrgPk7PnXUNB+L+tPa2mlKGz16Gx+ngUYUU501ojTYjHAmfnhf70MEBC0001XzmxB+P3XKpIoabrsm4ocJl+sREHtEEJ9cOMvyySce9zE+UDuW9PE9CzgXIHRxSCvpyfNZqCYPRTGRAOYMDQ+NEz3eSkcfIntaBgyIPvXM/L/M8/GRn/q0j0MwZduEGTOvQjW/MVjzVk3Dbdf8+NSDfBxi/ue5jzzsYxg6dJiP40sPsoTRZdJkmGuv+S8fT5iQz/KGavNQFBMJYI7gA/nR5T/0n88861s+Fo1l3dq10aeeeeKJ+T7efvsdfBxnx7FjfRxO4a/mN7Bo0UJ3/nnn+M+VNtwvv/y/Pn507iPevBkGtsUxoWSdKqbTPBbW77XXBB+Hk2Hw62GSRRyZcJQ1teShKCYSwJxA/PC90AtmaYQmvzQHixe/Gn3qGRprqKSxruY3MHbszm7//Sf7z5VORmHCCfx1wTPevBmGJR33y3q6PSNBAkyeTy9Y5LdjnWCxOZNlEKiswPQbnwxjYmwzRbOmljwUxUQCmBOXXPzdLvFLWhoh6osJAKOitNgi7VKzGZOo5jeGX/fW8Xsmb8RNh+XA3AcnnHiyP0dSiJvf6ZCxnbWPjMgQ7m+cdXrNyw5CzM/HZBjOy30highwXnBP1eShKCYSwBzgbSM0KMysk/g1B+ZzYkSSdqRjo4mlS5f4OI6JaTixpprfhHz/4kt9/IPvfye1iG633fY+rsS8azBSZfKL+eySfJPVYnnOZJg5cx70n22GaJ5Uk4eimEgAMwbxo5GlQbHZcKLx0NDbBBRGOkkiyDZMgTYK+tiUA31sEzdCOJZyZrQRTuev5jchjI5s5mba9Wt7/8NEH6d9GXSSKAwePNjHAwYM9DEwgkKwqx0VWp5zL/YSgno8E9XkoSgmEsAMwe9AAwf4XphgEA+VmOBEtrBEAHMhDSOLrOmssCaPckP42LZ48eIu/x2jd45nNM9+yo7A8RwL519wkY+Nan4TB3MlwkFdSvMSdbsmx9s1EVtmeXJdXvwdCj6Lw6mL7Gc7eYBAIcyhxYLZlLw3deWqldGWyjFxBjqF9aLSPBTFRAKYE0wuSAq2Bks0BmYHMjqwCRo0/PiLED4a6J9c2X3kxvHmI0MMCLbAmgXkSSO5an4TB7EmjSxYT+PL4pq8OJr74HqILbM8ue4hh3yym9+N4x
56aLbfz3HkARNibJ1iHN5xWy0IKsIKeU1+KUWleSiKR7/Vb7e1v7WmLfoqRO9h8xGD3IpVa9z699qjLZWBaY/RzfBhw1PN2rRRVNrjoZrf1IpdE+EqN/sYU+jadWsT08Y5EEeEkYklQvQlBrT0d8MGt0gARe+lVgEUpcFEyiiREXG9hFuIeiEBFL0eCWA+MDLET8gsZk3kEn0RE0D5AIUQG8H6VYmf6OtIAIUQ3cBvGM4GFaKvIgEUQghRSCSAQgghCokEUAghRCGRAAohhCgkEkAhhBCFRAIohBCikPRre2d9e0eIvgrRexg0YBO3dt07TuvghRCV0P/973Mtm76/800wHSHaLETvYdRmg9ybK/UmGCFEZQzkTTBDWly/JctXt3eEaLMQvYfxO41yC19608mCIYSohJHDB7oxWw+TD1AIIUQxkQAKIYQoJBJAIYQQhUQCKIQQopBIAIUQQhQSCaAQQohCIgEUQghRSCSAQgghCokEUIiMWfzqK+6N15dH3zZQarvoTiPySWVTTCSAOcED9cRj89yLixa61a2t0VbRSCgHyqXS8uA3hCTmPjzbl7HBcYdNnehuu2VmtKWTUtuzwO4rDL2ZPPIpzJt4+bOtmmvGy170PiSAGXP/H+52p04/0T9QJx5/uDvqsAPc5H12dbNmXh8dIRrFOWee4suF8qDxSsOMKy72vyHQoQnhHF866Rhfxo3E7isMHxq3tbvo/LM2SnPRoIzsebRA+VOutXQUmqXsRW1IADOEB4qHbeK+k9yNN9/r7rhvnrv6mpvclAMPdhd++wwvjqJxWP5THr+69mr/uRzPLXja/fTKy/3xsHbtGh8bu+w6zn39rPPcFVdeF21pDHZf1DULP7j0Kjfzhmt9Jyyt2Pc1EDlECsgPy5tzv/NDX66IYbU0S9mL2pAAZsjQocO86J148nT3gXG7u9HbbOv23W+yO+XUs/z+WTdqFNgMHHbENC8aCFw5Hp7zgI85PomRm4/yZW0C2WioaxYOPfxod90Nt/vt99x5q4+LBOWLyB17/Ofded+9zOeH5c20Y09wsx993gthtTRb2YvqkABmyJChQ73oxRk6bISPNQJsDibsvY+P77nrNh8ngZ/ososv8I0kjV0pknxKlcLkizwmYOz1oc4RDiPBOKH5j2vzPSkNcf9iT4THpsmX8Pis4LpYYuCfvvTVxPLjWUUIS2FpKlcu7I/fY/w+0pwHwnzOoy6IZCSAdaBt3Vof0xsVjYdOCmXBCKFUYzP7gXt9PPHDk3ycBI0VZrRZN1U/sr/z9pvd1AP2cL++/mfRluyJj1Is3ZhGGQVzfb4Th+C3xl/GPgv4FZPyjPvA7xgey29Ldfo4R+hfJZQ6d6X8beFzPv7i9NMSO6TlID9CnyF5wr3FSSr7MF+ZHBM/z3W/uDI6sjvxfC51TZE9EsA6cNstN/n4yKOP87FoPFYWpfxjd9w2y4vkDmN3jrZkD43t2ad/2TfUZibPEmtEpx2XPNJ59JEH3XFHH+RHueY3NGis8VuzD7O++bMZTV7wra9HR3VCHpJf/N6OxQeO8CICSaZmBJ8OCOf/3R0P+uMhfu5qeOH5Z328z0cqm6Ayb+4cnx9Tph7s04QJmXugjHoyl4eQr0yOsfNwb9QlLApxYSPvyOcw78x0LfJHApgxmDKo1AR6dtYA0HiYSUo0HisLGu44lBcjl08cemS0JXu4Bo1tluJn9Y60M7qi4aZhLeWnQoDYjynQ/IZA2sz8yz5GUebPph5z/nB2KdsvumSG/70diw/c7uupJ+f72OD8XJtJJJyfTgbHn3P+xT6uldWrO82SW2012sdp4b4QH0sTdaTUPZSDeyOfwnub/m//7vdRJqHZdMGzT/o4zDuua2Uh8kUCmDFtbev8zDNCOPNTdv3mg8af8qFBDpl5Q+fMvt12n+DjrME8lrX4gdU7Ol00woDAI4pJMCpJamitsZ90wIE+Dhmz/VgfPzZ/ro8NfGpxzPdtgm
TY+feb9DEfh2SZH5u2DIg+pYPyiHdSt9l2ex/H76EcnIdOQQh+SAQfXl++zMcwZEhnvhV9uUqjkABmDBX9sQVLugImEHqA9PzolYvmYfxue/o4nAyDHwczH+KY1KjXyksv/t1ddskF/nOWjT2E9Y5gZkVEMamBHbPDjtGn7pAH8Pj8R7tGlRbYVgpGNoh7T8eamJhA5oX53tMydNiw6NMGrA5gHk1L0nlghx07zelLly72MVgng+UqmJ3JP1E/JIA5gwmEhs4mXdAwiOaAsolPhrFG22aKZg2iwzpRyHuiQ2iGvPXmG32cBhs9Lnzhr95EF4Zlyxb7kcy48Z2dB4ORNBM58H3ZSJROXyMJhaZWzJJTCwMHDoo+bQCT530PPuXzFLMz+cdkIOuEiHyRANYJ8yeZzV80B1YudEwYwdBoI4o0THlh68e4Vtz8mjVmwktaClEKTHiAD4u0JoXQxIc4YHbFZ8j6OhuBMqGjEZg4MxmlmSjlBrE1heQdlgdvgfjON7r5CkU+SADrhFV+s/mL5sD8fPjK5v2508xVj9m6Z3/7+z7GLJ6nf7i1dZWPS02EScLEP60Jcfb9nabWgw89KpXZ2J6B1lUrfBySRV5YmTKSbcRIqnVVZ57HmfdoZ/0yP2oc8g6frPmmbTmHyA8JYIbQm0/qtfFQ22zDPfbc28eiOaDRYeRioxiox2xdRMZmVMbXAFKPbERaK/YO2lJvs0nC6qgt36mWUj5AO3/8RQQ8J6WWQVSSJ1amwEgqSQTZhqkxj1EWwhs3b+PbY2RHRyS0LiQJ/sBBnabSAQMqm8QjKkcCmCG8Ogs/CM5sHlYCDRwPNTF2/iymeYtsCTsl9L7rBWZEGmoazPBl6cwQxYdmo7e0UO/CYLNBMWlWMq2eOspv+C0iQT1GMJhIQ8POeUNRmTzlIB9f+/Of+IaewPWTlpgA5zffKyPg8Dkp9XxUmieYb7kHzsvicq7DZ9LPPbFt2dLFqUaslWLmbfLA7s1emh2f+GQL5Mlb8pRjyV/OobYifySAGXLEp471DRrObJsIYKMKevvY+UXzQUNjJsK8Jr+UgoYaMWDJTNwf2NLSEn0qj6WdehcG7ot6V81sU37Di54RCeoxgsFMRRp2FniHoxiuT8cBQaOhJzDblXdwloJ1cXQI+Y09J4xSua7dTxJp8wQ4F/dvYss1SD/3RHpZu5iWcmmKwyQnZn8zc9Tujd8zKzcuauQxx5G35DHHki/l8k5kR78ly1e3d4Toq8gKTBvr1q3zD2y5d0mK6hm/0yi38KU3Xds766MtvR9GATSENILN0mGy0V6ausyxldR5TJCM6spNOsoiT+w6vLA+j1EfJKXT8q7c/YG1F3mmT2xg5PCBbszWwzQCzAsaACq9xE9Uwj133eJHC9OOKf2i5npDPU5blyut8zT2PYlDFnli16m3uFje9YS1FxK/+iIBFKJJYBSA6fIL/3KqGsII5YnIEwmgEE0EfqJ6zELtTShPRF7IByh6LX3RByh6N/j85MdrfuQDFEKIjJEfr3chARRCCFFIJIBCCCEKiQRQCCFEIZEACiGEKCQSQCGEEIWk37q2d9s1jVz0RgYN3MStXfuue6+9PdoihBA9s0n/97mWTfu7fmvWvdO+dt270WYheg/DhrS41W+1SQCFEBWx6Sbvd4MGbOL6rX67rf2tNW3RZiF6D5uPGORWrFrj1r8nARRCpGdAS383bHCLfIBCCCGKiQRQCCFEIZEACiGEKCQSQCGEEIVEAiiEEKKQSACFEEIUEgmgEEKIQiIBFEIIUUgkgEKU4eWXX3LLX3st+iZKoXwSvREJYB2gcSCI5sLKhdC6alW0dQNsnzhhvJt546+jLemY/af73cIXno++NR/c17w/z/VpTLrvSqk2n4RoNBLAnDnzjFN940CgoRCNB4H63GeP6yoXwi5jt3E/+N6FNZcR5/7M0Ye7/T+yd7SleUDw7L4/echUn0bu+8oZP46OEKJYSABzhM
bw2l/8LPommgFEDoGCq356jfvNzbf7cOnl/+Euv/QSLw61MG7cbu78C7/nfvnfM6MtzQHCjuDdfefvfdrmPf6se+iR+T4Pzj/3m+qciUIiAcwJTEv/edUM3xgSROO5+647vMh9/uQvuMt/NMN9etoxbvJHp/hwwoknuRcWveqFsBZGbbGFm37KV93BhxwWbWkO/njfvT5G7EnbdtuNcTvvsqvPA+6b70IUDQlgTvzu5lm+t33scf8v2iIaCR2Sz/3jsf7zV792hheqOEOHDfNCWA5GSoRyEz7YH/etxUdYac4DnCftseVobe1Mz45jd/JxCPddCrs2oRp/YSXpr/VaQlSKBDAHeIBPP+1fvakpqaEV9ee55xb4+LTTz6xqtPP0U0928xvuPm6s++2sm6K9G6Ds2X/9L6+JtmzYZpNj4ucp5YO7/rprvI8uPDbpmmkYOrRT5B6d+4iPe4LrbLX5kK5rE0gLo+i0pE1/FtcSohokgDnw4x/90B186CebzgxWZJ595hkfY+6slIcemu2mTpnkDjnkk95v9vu77vPl++UvnuSFMS0IIH44O89998/x5lh8cHFh4Fg6Ufjo8NcRuG61TNr/AB+T5p6EhWvP+s2N3a5NWrlnRtFp7jlt+rO4lhDVIgHMGB5oJr6c/Y1vR1tEM2AmwNGjt/FxJWDKpvHGPIrfbOKH9+0q3/l/mefjNOB/xAdn59l9jz3dmWd9y+9DmEKz35NPPO5jfHSMWAlcl+/VwPVsYg7CwigUIUwyNdJJ+MmV/9Xt2qS1kntOm/4sriVEtUgAM4aJL5jZeIhF89EyYED0KT2UJ413yPbb7+BjE9Y0cJ74CBQTuU2SWrZsqY/BTJas18sKLBKMsLgeoo4QfmX6PydeI8kvOHzECB+nuedK0l/rtYSoFglghmDGomE54cSToy2i2Vi3dm30KT3Dh3c2xiHWaGMeTUvSeYDRGSxe/KqP4eNTD/Ix6/XwEWa1sJ4RFrNUQyHkGlgu4jA65LrsI6T1H0Kl6a/lWkJUiwQwI5jhhhnLevM2m42wcuUKv23pksX+u2gcochkAQJSK4MGDYo+bQChenrBIl+f8BHiO+SlClnVHxNCfG+A5SIE8ygTUbgu6yYJ1O+0VJL+Wq8lRLVIADNiwYLOSRY87OFsNgK+H6A3zPek3rbIlz33muDjZsz710osD7A1hazTQ6jwLX/jrNMzXSJgPjmE3JYpIEiYR1kTybWXvr7aB0aNlZAm/VldS4hqkABmBG8AsbeKxAMz/YBGgO8cK+rLhAmdryajM9KoUbhZAuLMeehPPk5aoweYWxEq6g9CZUs6ssaW7PzPPXf6+Kijp5VdI5iWcunP+lpCVIIEMCNoPJjgkBTGjt3ZH7PPvh/x362hEfWDxtXe8sIoJEkE2YaZLssRVgjiG1/ugN+LkRHT/jEbGkmLxgcPHuzjAQMG+hiefupJP6otl2b2cd2kY2ySinXSylGJXy5t+kshH6CoBxJAURhYfsBMTEYhmKJ5LygmOMTBXlq+ePHi3EYitnaQSSGIFtfG7wXxZTO2QB6BQpjNVMg5whnGrE/EZ7Zy1cpoSzJcFz8bi9O5NoH7xywPthwD/s8nDvXxjP/4kRdoAmlhvV5a0qY/i2sJUS0SwDpgU8JF4zn7m+d2maUZkdEoIw4IHyY61qRVAg16Wvbff7JfAP/QQ7O9aJkgsPA7vmyGNXscZ35jjmVCCe8wTWJAS+nlHQg610X8WZxuE024f87JZJXQKsFyCfKC/Qg0YdGihSWvnUTa9GdxLSGqpd/qt9va31rTFn0Vovew+YhBbsWqNW79e+3RlsrAJMjIafiw4bn6nxgBIQIIAJNCbBuEZs8kMCWuXbc2MY1J501D2msDxyKu1Zrty6U/Tq3XEiItA1r6u2GDWzQCFMWFBhkRyFP8SsF10wgQYlAqjbf+7rd+BHnC5ypbMpD22sBxtQhSufTHqfVaQlSKBFCIXggjK5bcfO20Mx
oi4EL0BSSAQvRS8B3GX9EmhEiPfICi11KrD7Ce4N/K29cohEiHfIBC1JG0fjAhRP2QAAohhCgkEkAhhBCFRAIohBCikEgAhRBCFBIJoBBCiELSr72D6LMQQghRGPotWb66vSNEX4XoPYzfaZRb+NKbru2d9dEWIYTomZHDB7oxWw+TCVQIIUQxkQAKIYQoJBJAIYQQhUQCKIQQopBIAIUQQhQSCaAQQohCIgEUQghRSCSAQgghCokEUAghRCGRAAohNmLxq6+4N15fHn0TpSiVT8q/dDQ6n95/xpnfPH/1223RV1ErFOaKFW+61a2tG4V+HX+btrRER4pa2WKzQe7NVWvd+vcqe50tD92ihc+7tWvWuP79NylMmVAHX++on/F6SXinrc0NHDTIH0f+HDZ1ott8iy3chL338dt6G9wT91rpM8e989uhQzf+7/1zH57t3n3nHTdis5H+e6l8yjP/7L6s3Eqltbfw0X0+0JB6NnDAJm74kBaNALNm6gF7+MqfFGbddH10lGgELy5a6E6dfqIvixOPP9wdddgBbvI+u7rrfnFldETf5pwzT9moTlqg3vYl7F4pX4QrDTOuuLgrP554bF60tRPO8aWTjvF1ppEkleGHxm3tLjr/rI3SLHpGApgDxx7/eXf1NTdtFPab9LHoCFFv6JXTeN3/h7vdFVde5+64b5773R0Puh9cepW77OIL/P6+DvcOSXXzuhtu9/v6CnavUw482P3q2qv953I8t+Bp99MrL/fHw9q1a3xs7LLrOPf1s87zdaeRJJUhdXjmDdf6Tl1asRedSABzYMwOO7p995u8UfjAuN2jI0S9mfPgH3xMg0EjN3qbbd0OY3d2hx5+tJv96PP+e1FIqpt7fWhitLdvcdgR07xoIHDleHjOAz7m+CRGbj7KnXjy9C6BbDRh2VGHrQNzz523+likQwIoCsHq1a0+HrP9WB+HDBk6NPq0AfMFJVFuH7A/KSQR7i93PQMfM98bOXGAdIbpLgf7K83H8Jy13q/5lu656zYfJ0EasAKc+50feqErRbl7SQv3kUfZWQeGkWCctPlZSblCeGyafAmPbxYkgKIQDBnSKXKPz3/Ux+XgAcW3UspnW2ofowzzMSaF0Dx15+03e99NuB9/lZm4DEsLv+X85mNulM+OdJPOMN3cRzzdUE0+Zn2/jOxxSWDeLCU8sx+418cTPzzJx0n0dC9pIO+4j19f/7NoS/bER6hp83PWzOs3Klf8ikl5lrbuGpwj9K8SSp273kgAc6B11Spf8SyIxmON29mnf7nkg1oLlPNxRx/kzdz4FvEx3nhzZ8MKmF7xIwGN0R23zfK+G46zY2m8ENAkc92jjzzoz88oxfw+9YbJQuTfF6ef1nWPmN4s3Vnma5b3e+TRx/k47ICEUBaIJCbxvKBMLe9OOfWsaGt2IEow7bgTfBynXH5Srhd++wy/z+ojxzCavOBbX4+O6qSauovg0wHh/NQbey7i524EEsAcoLDD3g69JaugojHQuNkEBh5Ua7BrNWkZNrI84lPH+Gsx8kAM7ZpbbTW6y7yG3+aiS2Z43w3H2bHWMD715Hwfh1CnaHSmHXtCl9+nWsLOmYWe8oGGDTOhNeB2j5jezvvuZf4Y8jSr/Mzyfs08SMMdh/uiHnzi0COjLdnDNRCfLMUPISKQdkZXiCv5VcpHWSo/rVwRJ/ZZfeQYRJDzh7NLK627nJ9rM4GI81NvOP6c8y/2caORAGYMlSYMNIBUSioow37ROCgHeqw8jDzYNNhMK89i+viyZYt9HB9FDBzYubZu6dLO/UaS33HosBE+Nn9lCCOUWkQgJOycWejJtGcN2ycOOcLHIQg7DSj8beFzPq6VLO8XaPwpcxrkkJk3dHZQdtt9go+z5sVFCzMXP2BJBoE6jMAAAl9qlFsqP61cJx1woI9DzF/+2Py5PjYqqbt2/qQZ8FnmR7VIADOGHlIYaHQRQSogJgWt1Wks9FiZzRcKYRbTx7fccr
SPafBCzM/BCDAOoyWOt958Of8kM4uzIt5JIxzxqWOjvck8+/STPt5m2+19HGe7MZ3piy8fqJYs7xfG77anj8PJMIx8eSYRx6RGvVZeevHv7rJLLvCfs27sH1uwpFswsyKimNTGlMpP8gCoe1YP09THtHXXBNEEstmQANYJM7HEe1OiMZgQ0vhBmrVi5aCzAzR4jDJoWBBX8/vER4bsY+IAaxOtN8+x9SDeSSOUm/0INrswD6GoB+Q/nVBGS9YpsUY7r7eQIDoT9+30PeftAgnNkLfefKOP02Cjx4Uv/NUtePbJbgGrBp3EceM7Ow9GI+tu1kgARaExsxAPdS2z0l54foGPaYgweWFWxDxFA/L5f/qK32dwLfZhNmQNovXiGZU2K+ZbaoaZe9VinVBGLIxgaLQRRTpDeWFrB7lW3PyaNTY6T1oKUQo6Z4B/jrQmBTpIRm+su+WQANaJNWve9rGZykTz0dMoCEoJACNIxI5eOA0DDQIxDUh81DT7/k5z1cGHHtVrRlQ2kln+2lIfx2GWISSZepNohJCanw9f2bw/z/GfbYZonpz97e/7mMkqed53a+sqH1tnJQ0m/m3r1vq4Jyqtu7b8qHXVCh+HNENnSgKYIaUKlO2zbuycZFDvl76KTn8FJqikGYrmL2EkYFijwOy48Dd8LreGa97cOf581hARV/KQl/O5lIJRhY1o8sQmMdikkRDMvZjSaHhDU2+1+ViKWu+VBpuRi41ioB5vwCEf8LNy3aT7zqoMWcsHpd5mk8Qee+7t49tuucnH1VKq7tr54y8i4LnQMog+xm23zPQPFo0tFZrAZxadUvnxN+VpbhGlwQSF34JGwsqGHjkTYGD6v/27jw3zDV5x6Xf9sZQfM0ZL8fUzN0yoCWdXUvbUidD8NXnKQT6+9uc/8RMJCKzFSpqm3xOYW/HBmOjmBaZdzGWY15jNHNZv7hPIgziV5mM5srhXa5DB0lYPMCMivnQUTKiMau6L+hIG6hjnpowqmT1r5cpvrVzp0NCRo2w5r02UgUrrLuc33yvPm9UBxI99jUYCmCFTPn6wL1Qa29A5TAVgllaW07pFeuj5swCXB50Fv1Y2PJSYLe978KmNzJ+UFfto8DmWhoCedanZfM8+0zlLkoXhmD8t2DpAGjlrSBgp0fhyfSYSEJgxaOvpqqGlDv/SiXsn3ZYnVr/JV+41PtEHKs3HNNRyrzyfZiKstzUGPxttAXUwyR+Y5r4s7Yyqw8B9McqsJl/5DfV02dLFvozo0NCRo2ynTO18b65RTd2lc0kd4DfxOmD30yj6LVm+ur0jRF9FVlhjpxFffozfaZRb+NKbru2d9dGWdFRSNpil6JmXOxZzDiM9evg0cnHo9fLg00CFEwqAtNDwpfE/xuG3NFY0Lvga64nlIf+LLo0vKE0+lqOR95onzXZfVq5p6iTHVlJ3a60DWTJy+EA3ZuthGgHmBYXcDAUtNqaSsqFx7+nYdevW+XjzUVv6OI75AUeM2MzHIZy7GvGDe+66xfegpx2T/PqrPLE8TCN+kCYfy9HIe82TZrsvK9c0dTLtcUatdSAPNAIUvZZqR4BZQ88W/yJgSvrABzf4NpgcYGbCWsx+cWzUicm1r/4rI6Ov3muRyrDZsBGgBFD0WppFAIHJANdfe7X3dcXBZ5K1/5fGkyUJzTCRIG/66r0WqQybDQmg6PU0kwAa5ucwms3kI4SQD1CIXDA/hwUhRPMiARRCCFFIJIBCCCEKiQRQCCFEIZEACiGEKCT93li5pr0jRF+F6D3sMHq4e+W1Vvfuu+9FW4QQomeGDm5xW44c5Pq1dxBtE0IIIQqDTKBCCCEKiQRQCCFEIZEACiGEKCQSQCGEEIVEAiiEEKKQSACFEEIUEgmgEEKIQiIBFEIIUUgkgEIIIQqIc/8fG1HbCQFHpcUAAAAASUVORK5CYII=" style="cursor: move;" width="320" /></a></div><br /><br 
/><div style="border-image: none;">A quick glance at this table shows that there seems to be a pattern of milk being bought at the same time as bread, but let's figure out what the support and confidence for this pattern are. The <strong>support</strong> for a pattern between milk and bread is the number of transactions in which they show up together.</div><div style="border-image: none;"><img alt="" class="mw-mmv-dialog-is-open" height="233" src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Venn0001.svg/384px-Venn0001.svg.png" width="320" /></div><br /><br /><div style="border-image: none;">Basically, this is the equivalent of a logical AND. So, for our example above, the support for Milk <em>&cup;</em> Bread would be 3. In larger datasets it makes more sense to divide this number by the total number of transactions so we can get a feel for what percentage of transactions contain these items together. If we did that, the support would be 3/5, or 60%. Support does not have to be for exactly 2 items; we can figure out support for 1 item alone (support for Salsa = 1, or 20%) or for several items together (support for Flour, Milk and Bread = 1, or 20%).</div><div style="border-image: none;"><br /></div><br /><br /><div style="border-image: none;"><strong>Confidence</strong> is defined as P(Y|X). If it's been a couple years since you were in a statistics class, that probably went over your head, but that's OK...don't be afraid. In math speak, P(Y|X) means "probability of Y given X". In English it means: if you already know that you have something, say milk = X, in your transaction, then what's the probability that you also have something else, say bread = Y? 
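</div><div style="border-image: none;">The support and confidence calculations described above can be sketched in a few lines of Python. The exact transaction table lives in the image near the top of this post, so the five baskets below are a hypothetical stand-in chosen to reproduce the same counts quoted in the text (3 of 5 baskets with milk and bread, 1 with salsa, 1 with flour, milk and bread together):</div>

```python
# Hypothetical stand-in for the five-transaction table shown in the image
# above: the exact baskets there may differ, but these reproduce the same
# support counts quoted in the text.
transactions = [
    {"milk", "bread"},
    {"milk", "bread", "flour"},
    {"milk", "bread", "eggs"},
    {"salsa", "chips"},
    {"eggs", "flour"},
]

def support(itemset):
    # Fraction of transactions containing every item in the itemset (a logical AND).
    hits = sum(1 for basket in transactions if itemset <= basket)
    return hits / len(transactions)

def confidence(x, y):
    # P(Y|X): of the baskets containing X, what fraction also contain Y?
    return support(x | y) / support(x)

print(support({"milk", "bread"}))           # 0.6  -> 60%
print(support({"salsa"}))                   # 0.2  -> 20%
print(support({"flour", "milk", "bread"}))  # 0.2  -> 20%
print(confidence({"milk"}, {"bread"}))      # 1.0 with these stand-in baskets
```

<div style="border-image: none;">With these stand-in baskets, every basket containing milk also contains bread, so the confidence of milk &rarr; bread comes out to 100%; with the real table the number follows from the same two support values in exactly the same way.</div><div style="border-image: none;">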
So if you have milk...</div><div style="border-image: none;"><img height="222" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAdsAAAFLCAYAAACeHCZ3AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAADrdSURBVHhe7d0LfFXVnS/wlff7AeEVhATkIRREDVXRCi3coq0yU3CseGux3mv1M9Xb0XZqO/2MdTpeW0en1XrvaOcztjNtrR1lOupt0VZlZISOxbZEq1AoBCG8wivkfZKT592/tfc6Z52Tk5CE7LPX3vv3/Xxi9jmEh7Czf+vxX2tlPP8fbw98d+NW0RaJCiIiIho/61ZdLL5862qR1ZB9wdc7oz3O20RERDRe9hw4LgM3Y9mGRwbwRklhnphXPVX+IBEREY1d/bFG0djSIa+ff/SOeNjWLKwST3x1vfwBIiIiGrsHn/qFeGnbTnmNsM2UV0REROQahi0REZHLGLZEREQuY9gSERG5jGFLRETkMoYtERGRyxi2RERELmPYEhERuYxhS0RE5DKGLRERkcsYtkRERC5j2BIREbmMYUtEROQyhi0REZHLGLZEREQuY9gSERG5jGFLRETkMoYtERGRyxi2RERELmPYEhERuYxhS0RE5DKGLRERkcsYtkRERC5j2BIREbmMYUtEROQyhi0REZHLGLZEREQuY9gSERG5jGFLRETkMoYtERGRyxi2RERELmPYEhERuYxhS0RE5DKGLRERkcsYtkRERC5j2BIREbmMYUtEROQyhi0REZHLGLZEREQuY9gSERG5jGFLRETkMoYtERGRyxi2RERELmPYEhERuYxhS0RE5DKGLRERkcsYtkRERC5j2JLrdtYdE7W7Dw/50d3T63wlEVEwZSzb8MgALmoWVoknvrpevkk0lMaWDlF/7IxoON0iP+oOnRJtkaj8sfZIl9hbf1Jen6vKSWWicnKZ80qISxbMELk52WLxnOmiorxIVFdOdH6EiMg8Dz71C/HStp3y+vlH72DY0mAySE+1in2HTsgg3bW/wQrSqOyhmgSBW1FeLOZXTRFFhbmiZkGVKCnME/OqpzhfQUTkDYYtDVLfcEYG6dt77GFdhO2oZWSK3Pwi50VcTkGxczW8/t5u0dfT7byyDfT3iZ5oxHk1cugB1yycafWGZ8qeMK6JiNKJYUsyXGt3H7LC9Yj8jKHhkcgtKBGZmVkiK69AZGXniaycPJFjXWdY77mt1wrd/v5+0d3ZaiVzvwxhhHNfrz2EfTa4vzEUrcIXgUxE5BaGbQghTDdv3y127T9+9nBFD9XqjeZYvdR0B+pY6UHc39cruiNtVhB3OT+a2uK502XwLq+Zy55vAGHaA/UDqrZA6enpEzv3Nziv4vbV21MmuoqyIlE9vcJ5ZcvNyRKL5lQ6r2xyOqOs2LqnKtmIoxiGbUggULfu2Cc2v/VHGbBDyczKkcO/GO5FwObkj2zY13T9fT0ydBHE3Z1tww5H46G6Yuk88dHLFzB4fUTVFtQ3NMr73ZTaAjxLUTswt2qyHdiVFVZoI5AHT7NQcDFsAwwPmjesgN1WWyc/p4Leam6hFax5RdbnEqvnmu/8SLBh/leGbmf7sOGLKugPW8F77VWLWGhlEAQr6glQV7Dv0Mlxq3oHjHIM1yMdrrE6Wvi95lVNkb1jNOxwv1EwMWwDBmtUEaz/YfVghwpYzLXmF08QuUWlMmzJDt9oR4uIRlpEV3sz3nB+JA4PwtXLFohrly/mUqM008N1tEV7es9SLiHTAm28Ri72WWGvhp0bW9plHcShhibZw0bPeqRrx/FnQwCjngB/bt5nwcGwDQh8Q298tVYGbKpvbBWwecXlcqiYhmEFLQJ3uODFQ3D9NUvFdVbwcl5u/I22aC9VoM6vniKKrfdMgO/JnXUNow5i/D/h/02G74
Iqjq74GMPW517bvluGbKp5qZy8QpFfMlEGLHuwY3SW4MXDcN2qi8WNVvByDu7cIIRwP2/evkdeDwV/54tk7y8YS7nQK67dE29YJBdm6dCI4LSGPzFsfQhzsS9te08890rtoOE0zLkWlk1iwLpBD962wWGAXu76q5fyITgKuH9fs8L1ZeshNFTAohGD59GiOdNC0bvTwxeN6KF69Qjea5cvEquXLeRwsw8wbH0ED6aNVsAiaJNbv/lF5aKgfKpcpkPuQ3VzpPmk6Gw5Jfr7+5x3bfjeWX91jaxopsFwH6My/uVf7UpZ2IRhefTeLr9wlpy/DHuQqCH1t947OGQdBv6OPrpsgWzwIYTJPAxbH1DzsRhiS5CRKQpKK0TRhKnsxXrF6u12tjaKjqYTgzbUwENvw5rLOK9rwbwkHjRDLT1TAbu8Zo78zHnw1EaywgBz1Rhmvm75hcbMWRPD1mjoATz+zJZB31QocCosnyIKyyYbvblE2EQ7mkXE6ul2R1qdd2wI3XtuXhnKni6GQDe+skO88Po7Keci7YCdK6u8GbCjc7bgxd8nGnpo8LG36z2GrYHwTfT0prfEs9ZDSq9URMFTQdkU2Zslc2G3qo6m46KzrSmhoArfU5/75HI5NBp0mHd87tUdsYeLTgUsPrPnNT7OFrz4u77x6qXWPchNWrzCsDUMegDff+HNhKIIhGzxpJmcj/UZOa/bdEJ0NJ9w3rGhoOVuq6cbxOplzMU+92rtoKFi/L+iYvv6VRczYF2GZwce6hhRSC6uwhAzQhc9Xkovhq0hsFD/8Z+8nlAwguHi4omVVm92svMO+RF6uq2nDicML2OID8N7t6y53PfDp2o+9ulNvxlUHc+Hu7dk6L66Y1AhGhs/6cew9VjKedmMTFFUPkUUTZjGOdkAQdgidPVDEfDQQy8XvV2/QchiquPHm94aNB/LYUuzoDGP0E0eYsaaZYRuEBp9pmPYegQPKgwXJ8/LYglP8eQZrC4OsEjzCdFxpiFhyRDmce+7/eO+WeaC4eLvWI1EvSfLghzz4d8LIxA49UtvIPm50ecXyWGbNeOi1V/Hi8rJZRz6cQnWzd31zWfF1to60ddvF9BgXrZs2vmyN5uZyRZmkOEkJVSSi4EB0dNlz6mdPNMmfvbGe6IwP8foAioUPt3/3U2ygA9FOYDe0QarZ/TQ5z8hVl4633odjsMs/Aj/Nh+6eI7c9ayirFCejNTd0yc6oz1iy2/3yrW85583SUyZWOL8DBoveN7j0AzAVq8MW5f9yHpI3f/kJnHGKVzA4eslk2aK0inVIisnV75HwZeBc4ILS0VByUTR2xMVffiwGl7brYcdHoCXLqq2gtec+wGFNv/w7BvioX9+JaE3i4f2N6yQvcp6gHMY0j/wb4VGHf797DN97e1e0ej7udXow97NS+afZ9Q96HcM2zTBw+ruR/5NbkunerN42E44b748JIDCKTMr2wrcCpGVlSOinW3WOwPiyIlm64H3rphYViQLjLyEKY5/+dmvxQP/+JL4/d6jzrv2NNPDd68Vf/qRJXwg+xhCd9mS2XL4+MjJZnnvwftHToufbn5bPqsQyllZmfJ9GjuGbRpgnP7eR5+P3cgogCqumC57syyAIsBB/QUlE0RPtFP093bLoT18cx4/3SqDzYteI3Ys++K3/l381zvvyz8PYC72a7d/XNx54wpRUc6DF4KivKRAfOzKD8i9p+sOn5SdAwQtjjRET3fyxGIxZwZXRZwLhq2LMKeFIWPMb6mHFebrJp43V+QVlcvXRIrs5ZZOcuZy2+V7+ObEaMj5MyaJGVMnyPfchvv2wad+KQv4MJcHmJe9bd2HxNf//Drrz8KHblDhuW/P5xaJPQdOyH9/NZ+L6Q1OF4wdw9YlKLW/66Fnxe4Dx513hCx+Kps2Wz5UiYaC6YU866O7s0MM9PfKh90rb/5Bzq1hLtdN2989IP7i4Y2xOTxQ87JXLJnN4cSQWDh7mpwiyLCud1uhi1
4uRubS3fALEoatC5575XeyR6t6BTj2bsL0OXavhWgEsrJz5bac2IWqN9op38OcKY5dwxrW8e5dYG4WBVDffnpz7L7FMqRvffF6Gbaclw0f3GNo3KE3W7vnsGhp74w1/DC9gbleNr5GLjls+Td3jh75wWty/aGCB+akqoVy+JhoNDCfXzplltVQmyer1gHbIH72b388aKemc4FTpW6574fiWauRqKCh/b2/+XQo9nGm4eH84B89+Bm58YWCOhTcM7h3aGwYtmOEea67HnpO7m2slE6ukg9LFEQRjRWGlSdaDTaMkADWaX/GetBhquJcfe+F/xK3P/BM7OB2zNU99qUb5AYb3MaPFPRyP3fjCnlv4B4B3DO4d767cat8TaPDVBgD9DLQ21Cbr6MXgt4I9zSm8YIdxSpmLogtE8PuP1/41k9jO9KMlnpQoghKwbDgD60eDD4TpYJ749mHb0vYaQp7B+gNNhoZztmOEnoX/+uhZ8XJJrt6FA/FCdPnihye0EPjDBthYHkQtnnsjUZk0QrmgToi3aMKSBRBfdEKarUUDZXGX771avH5//4Rzs3SWaGXi53CsAwMS4Ow0gKbYWAud17VFBZPDYFztucAvQr0LtQeo5iXrahaKLLzCuVronFnBS6mJ0omzXDeEHKu9a8ef1EWOZ0Nivf0exZzsk/9zafZsKZRwz3zr1YvV83rq9EW3GN0dgzbEcI8BTaWVg+4/JKJYuJ587hJBaVFYflUUV45J1YPgNNcMJSn9itOJbl4D1XGT91/s28OPyDzYP4W99BN13zQeUfIewz3Gg2PYTsCKCrBPIVSNLFSlE2dzUIoSitsjFIx44LYCVE4sxRru5MDN1Xx3pdvXS0/iMYDTgxCUZ1akoZ7DffccI2/sGNanAWCVi8qQbVx8UQujyBvYMpi4swL5KlRkBy4ycV7mJ9FRSl6tUTjCcPKuLdwj4Eby9SChGE7jFRBi3W0RF7KzMqRB1okBy72NMYSIVUlioKWJ756E6uNyTU1C2fKGgA1NYF7b7yWqQUNw3YIDFoyGWoFkgP33seeTyiEwsYE2KCAyE0IWmyIggM04FyXqQUVwzYFzM8yaMl0CNzy6XNFZqY9bzYwMCA/r7z0AqtHu56bVFDa4F577Et/FqtyRyEpCkoZuHEM2yS4OfQdUhi0ZLJoR6vo709cAnT0ZJNcC0mUTiiWQtHUPTevdN4RDFwNw1aDmwI3h8KgJZN1tjaK1pMHnVd2TxeGqlImSof113xQhq7CwLUxbB04OJtBS36RHLRYjoYNVvRlQV8Z4cYXROMNw8kM3EQMW8s+68GEw7MVBi2ZLFXQYjma3Dp0xrxY4GIpxnc3bpPXROmGwL2HQ8oxoQ9bDLXpPQA8uBi0ZKruSGvKoFVk4J43N7bhCrZ2xKgNkRcwpHzbuiudV+EO3NCHLYJWLcLG0WbcsIJMhaBtatjvvBoctAqO5iufFl9bi1EbjN4QeeGz6z7EwLWEOmxRdax22kGPoHza+fKayDSxoB3ol6+HCloFWzsWTZgmrzFqg0YlC6bIK6kCd+uOfc6rcAht2OIfOrbfcUamHHrjoQJkor6eLtFy/P0RB61SXHGeHK0BjN4gcIm8khy4X3tyU6jOxA1l2OIfGP/QStmUajn0RmQcK2Cbju2XZ9pCUfnUUU11YLQmsWAqvoacKN0QuOrEIIy4fOHvfxqaEZfQha0siPrOC/GCKOvhhePyiEzUcvyA7NlCbkGJ1VsdXU2B3GWq8vxYwRRGc8I2fEdmwYlBamtHjLh87cmfy+ugC13YYq5ADV3Ih5d2KDeRSTqajouujmZ5jcMHylD0NIZjHXFSEEZvlLAN35F5Hr57rTwoA7a/e0DuRR90oQpbnLmIQ7dBFkThMG4iA3V3tov2xqP2Cytg0TtF4I4VRm8wigMY1UGjk8gr2Ev5gTvXxM7DxV70QR9xCU3YNrZ0iO88s8V5JaxewiwWRJGR+nqjoqWhznklROmkGS
Inv9h5NXYYxVG/zs66Y/ECQSIP4GSqL9+62nmVOOoYRKEJW/xDxuZpJ0wbl4cX0bgb6LeC9v1YQVRB2WT5MV7KplbHhqLRm+BwMnkJu0ypgikczafX0wRNKMIWC6gxLwCoOi6eWCmviUzTcrJe9EQj8hoNQvRqx5N+/3M4mUzwuRuXxwqm0Pi7X1spEiSBD1sMHz/+zOvOq8SWPZFJsOdxV5vd07QLoma5cq/qIzscTiavYd4W87cVZUXyNepqUF8TNIFPncef2SKHJwAFIhw+JhP19/WI9tOHnVeoKYivj3UDh5PJJAjav7t7baxgCvU1ahvdoAh02GIDdrUJOx5co12jSJQuLScOxjeusHqeuQXuNgoxnFw0IV6d/MgPXpPXRF5BwdSGNZfJa3uKI34SWxAENmyxeQV6tUrpVHeG5IjOFYaOsfcxpLOmoBjDyXmF8hq7SwVx6I785ZY1l4v51VPkNe7JIB1YENj0+c4zr8v5WkA1p9s9BaKxwPBx2+kjziucpZzGmgLr99F/vyAO3ZG/YBj53s/ElwOh3kY9x/0ukGGL48RUiwjDx+Nd0Uk0XhC0CFzwolGI3aX04WR9NIjICxhORg8XUG8TlIr5QIbt919807kSomSyFbQcPiYDRTuaY9XHXjYKMZysdqdCJSjPviWv4XSg6kp7z3os2wzCcHLgUggPCrUlI+ajcK4nkWkG+vtE60n7LGXwtKbA+n1V7xb0xiqRFzCcfN/tH3deBWM4OXBh++D34kMOOPeTyERtpw/Hh49LKzyvKSjEsjinWIq9WzIBhpP13aX8PpwcqLDFRtZ7nYcEe7Vkqt5oRG5gARi+LZk0U157TW+csndLJsDuUvrpQH4+rCBQYas/INirJVO1qdN8LKVTqoQpB2KgccreLZkkeTjZz43AwIQte7XkByiKUmtqTbxP2bsl09QsnCk+vHSevMYz3q+928CELXu15AcdZxqcKzPv0+Tebe3u+BaSRF65be2VzpV/G4GBCFu9V5tvPSzYqyUToVcbO9HH4NEXvRHw+E/ih3gQeWVe9RTf924DEbZPbtzqXLFXS+YyvVer6L1bPNjU8ZREXkru3frt3Fvfhy2GudSJJbmFpXJHHCLT6L1aP4y+6I2BF7f83rki8k5y79Zve3n7Pmz1B0Fh2WTnisgsfunVKmgM4FAE+LXVsw3K/rTkb3rv9ulNv/FV79bXYYsHgNotCusV86yeLZFpIs0nEnq1fhl9KSybJD/jgRak01fIv/TeLZ7/furd+jpsN2/fHWvZFODBwD2QyTQD/aKj6YTzwl81BfklE2PfUy++zqFkMsO9t66OHTLvp96tr9PpBe0BwCFkMlE00hrbltFPvVrAaFG+M1qEo/dYKEUmqCgrEtevulhe66ObpvNt2OqFUXiIqVNLiEwSaTnlXAlRUB7f7N8vcOyfwkIpMsW1Vy1yrnBfvutcmc23Yat/4+eXVjhXRObo643GdotCsZHXhw2MBSr8cfwfsFCKTIG5WxxUALW7D8U6XibzZdjqQwd4EHATCzJRZ8tp5ypebORHsh7CwkIpMsnalRc5V/6oKfBl2A4qjCIykDrZB0VGstjIp3AEoMJCKTLF6mULREmhPerympYJpvJl2L78q132hfUQ0x8ERKbAJhax82pLJvi6pkAWSjmjRyiU4n7JZAJUJH902UJ5jdHO17bvkdem8l3Y4ps9tg9yYSkLo8hIXapXayko9f/oS17xBOdKiNo9h5wrIm+t04eSDS/g813Y6q3qHB8WnFDwoUfbZfVsAXsM5+T7/z7NK7IP8Ia33jvoXBF5C4VS860P2Fl3zOgzmH0Xtm/viYdtbkGJc0VkDn25j5/nanU44F41GvBQa49E5TWR1/RCqdgUo4F827PF8DEPHSATxaqQMzIDtdlKbmG8cYvlFkQmuG754tiOUi9te8/YQilfhS2GCDBnC3lF3AeZzNMbjSQURqntDoMgtyD+Pff2niPOFZG3EL
SoTIa2SFTsrIsf+mESXz0J9MKMnHwOIZN5sD2joodTEOTmF8YaD37ZIo/CYUXNXOfK3AI+X4Wt3prOLWRxFJlH7RgFelFRIGC9sLZXshplIvJazcIq58rcAj7fhC3G4bFdHKDCU20hR2SMgX7R3WVvZ4h7FEVFQaOvAOB6WzJFcWFebPtGUwv4fBO2GIdXE9+sQiYTdXdFZOAC9hQOIv17T18ZQOS1yy+c5VyZWcDnm7BNmK/l+loyULezthZyCwM2hOzACgA1qrSV87ZkkJoF8aFkEwv4fBO2u/bHK8zyAtprIH/r7myzLzIy7WKigMopKJKfUfnJU4DIFIvnVsaWAJlYwOebsFU7g8jtGQO0nIKCYaC/T/REI/I6N98KowDfo9m5Bc5V/PuSyGsI2iuWzJbXJhbw+eKJgLla1YLOzot/oxOZItoR/8YOXBVykuzcfOdKiPqG+B7QRF67ZMEM58q8Aj5fhG39sfjBwDlaq5rIFN2d+vraYBfwJYZtk3NF5L3EeVuG7agd1FrPWTlc8kPm6e60R14yM7MCv41oVo4Vts4w+b5DHEYmc+BggspJ9sgSe7ZjUHcovrE7h5HJOAP9oq+nS15mY742BHKc3m39MQ4jk1nUKUCYszVpva0/hpEb4sPIenEGkQl6u+2ghbBMc2Q5YcuKZDJNdWX8pK2DBjUG/TGM7PyFoRI5iLvykL/pYRuWaQ5WJJOp5lbFT9oyaZrD+LBFJbLq2XIImUzU6wwhQ1juUVYkk6lmVVY4V0IcMqiAz/iwZSUyma6vJz4vFJZpDlYkk6mqp2vDyAY1BI0PW1Yik+n6nGHkME1zsCKZTIXNLdS8rUlTHMaHrV5NlsWeLRmoxwnbsE1zZGE3N0tjMwukyCyzpttDySjeM6Ui2fiwZaUjmUwu+XFO+gnbNEdmth226jQuIlOYWJFsfNjqsnLsb24Krv+xeq745PL4UVmQ6j1ThLESWcnIyJCf2SAm05hYkWx82B4/Hd8Gj7zzsaXnydDD59HAz8FHsgc2XCLuXLPAeWXD133iiipxdY19CDSkes8ketiGbRhZ9WyJTGNiRbK/erbOHBGl323XzJOh9+fXXSCDciR+8JdXyZ+Dj7++aYnzrh20S2ZPMDZAR0PtHAVhCx/VswXTTlihcKucHD8MpC0S/x71kr/mbHm0nmdysu2/+57efrFw5tlPtfnL6xeJ8qJc+fWQlxOv0n3vYJM41dIl/ngkWA/osDUGMzPts0OJTFNcGJ/SaTBkdNT49Oru6ZOfscE7ee/I6Q4ZvAjT4SxwAhlfn+zfth0Utz/+pvjKP+9w3vGvvt4e58oStsZgZvz/lxXJo4PGZnNHt/OK3FCiBa4JjH86tDtDABlsRRvh3QP2/MeFsyfIz6lgXndyWb442hgRbZ2pK1XHMv+bDEVTnhdODQzIT2FsDGZojQvVKHZLpKtXBpT+4WdobL7x7nHnFbmhuNDeeKWxuV1+9prxYdvWYa+Rysxiz9YE//JanXzwYYh4qKC7Zqk9F7urvll+TqaKnv7sqmrnndF77I5Lxc0rzxfXXRY/LNoLA/12yISxMYhNPBTVKHbLYy/+QQaU/rH2gdfFP770x8BNR9D4yHWmrtxuCI6U8WGr1vDxAAJz7Dpkh+iVC+Pl9boZk4pkID+5aY/zzvjCEPbsaSVyGO7Wb//Kedcb/X3ON7JWLBQWeoEUTv9x02/3npafUVynPnAf/HLHUTkdoUZciJSK8mL52e2G4EgZH7Zcw2eebzz7rvyMUE2GByDmdOsa2px3xhd+/eWLpxoRtDDgbGiRFcJlMJnZuc5V+qCKXX3gPnj4fy6V77/5B24ZSam53RAcKePDlsx04HhbykKppXPt9W1uPPywLtekoIX+PqdAKoQ9WxNcMMMuxEMPN5k+r4t7Bq9TFSUlzwefjf61+Llno389pU/lpFLnyoxdzowP28pJZ19mQum375jdc9ULpTAXW5ifLYM41cPvXBTlZY
uVS6bJa1OCVhfGnq1+2pHXLp0/ybmyIdgwr4vhZdyPuGfwOvneebX2mPjUI1sT5oIxD5wqlLftPCHnifWvxc9VQ9zJ8Gs8s+X9hK8f6tcmd5kwQuqbnm1fD29Qk2A+NrlQCkN74Mb8WUe0N/aQQnGUCfp6zQkbr3nVKEYAwlAbpGBN9xf+6bdyVETN8yr/79eH5H2MH3vq7ivlB74GDcUnfp5Yb4B7eqv1e+Hnq6/FfYiQx7QKAj3ZS785Ipe54dd/4q5lsfs2+dcmd0zTerYmMD5sK8oHzwuSGdS8LAqlsIwHRUsIYFQsuwE9A2ySgd8nefjaa17MX3otNoSeRgg9fKA3iV7jt5/fJe+F5J6tgrDDjyOM1TwvIBxxnyII8WNYqoYPfA0CF7++XuWM97+w9gPy56uvxX34qY/Mlj+uRnoU/Pr4vTHag1//vIpC+fXYgW3WVLtwh9Kn4ZT3G1sYH7Y4mxDYizCPmpdFodTKi+whXlWp7JaXf3tEfl62YLKxhxOEhSoOg3Q1iu9/+m35gd4kwgzQ4xxqNAWNQBWwOhWOl8yJnw6jVE6097jeczhxSRGmSJIVF9jTB53RxDlB9eur0R4dlqxR+BgftqbtAkJxGG7DxhUolFKFKqpS2S3ojeDBit/T6zW2uv4+7wsw0k3/f1ZrGt324v2rEj7U0CwCONV6WxWcyVSxEgJV9ZbVR3LI6jByg3v+bF+rwleFMXnLhBFS48NW3+PSi2ErGt772hKfVPNWbsCDFQ88zBdjLswrCXsDa7280ND+n0uc3XrSTR/K3fL7ke/IpHrFh051yPtW/zjTFpXDv/i1dRhaRkHUXU9sj/WwMYxNZuqIxOt80tUYHI5vhpFBH7YiM+Bhow4bQEs/XfDAQy8Dc2Fezd/qG60k7JEcEgNqQw+L3ihOt6nldu91NBXwagoC86nqZKrkD30IGEGLURvM8f7kyytiPWsUSpGZ9NN+vGoM6owP24llhc5V4jc3pReWPOAjlU9+8z/lj6UqjELrHz+Gzwq+Du+h4EmX6v2hvhbQy8CPedq7UPsDO3skh0m/s1Wl3iD2AirVYagiqVRQ4ATdTkPxbHbsa5Sfr/rAlJRzt8kK8uyvae8c3Ajj0p/0aNc2s/CyMagYH7a6/n72bMks6lg9tUdymAw4DYyKMm/nw7BWFlakKIQayrzp9hDxf57jYQBDzdmqX//N3afkZwVBy6U/6aF2jvK6MagYH7YJ6/c4jEyGiu2RHCL9HgydY22s/qGqkjEsnKrqeCiYj8XPwc/FRhOYAkHRFIqssHYXv66+49PSefbOaC9YvyfqBfCB3x+V0Kng10clNH59LFHCr4+haAQtl/6kh9o1yuvGoGJ82OrVyL3dnc4VkRmycuz1tWGsJ+hzCharpw9ePjPe1BAxphX0DwQX1sWOZTkNfs5f37RENLZF5TQHpipwqAGmJfD7qaFmwGvUBiA8US+Aj4YzneKuP1ngfMVgN31ktiy0ws/Br48AR+8bv+9ohrxpbEw7Yzlj2YZH5FhQzcIq8cRX18s3TVLfcEbc9JXvy+uCssmidHKVvCYyQdPRvaK7067CnjrX3hQ/DPp6usTpenuu/KZrPijuvnmlvPYr1YvFkjJUuQ8HXzuSr1NQyId5ZT28yX3Xf/GfRMPpFs+y7cGnfiFe2rZTXj//6B3m92yrKyfGxtx7o+zZklky9T2RQ9S77e2OD7FWVQ7euMFv1K5QIwnQkX6dgoIqBm36IWhN4osCqVnOMFUfh5HJMPqZrmpYNQz0sJ1XNcW5IjIP52xHobrSLk7AUgNubEEm0Te28KJgyCt6/cSs6fb3J5EpMP2omLDsB3wRtvowlUnHehFl5egFfOG5N/ucni16DaY8zIiU+mP2umioNmSawyc923i1I+dtySTZefG9d1E0FBY9TthWTuZ502Seg1rPVo2Mes0fc7baXxZ7tmSS7FwtbLV5zCCTjQqnGIzztWSiukPxzUTmVZtxj/
qjZ6ut4+thkRQZBPsjZzq7SKneXtDpxVGmDNER6eob7GFk7NPAAqlRwNIfNZTMYWQyjRpK1nt8QZYYtiyOIvMcPGYPI1cbVLzni7AFVfGIauQw7kNL5srRh5J7g7/JfG9XfGceU4boiJTGlo7YVo0mTXP4Jmw/MKfSuRIi2mHWYmUKt8SK5OAPJasds7BvuSlDdETKvvqTzpUQ0yaVOlfe803YXnGhfUA0dHe2OldE3gtT2PZ0tceO1qtZOFN+JjKJmq+FWdpKFq/5JmwxXKUOJeiOtMvPRCbQl/8E/bCMaCTe0F1RM9e5IjJHfUOTc8U52zFbsXSe/NzXGxW90Yi8JvIaqpEzM7PkddCX//R0xhu62OCdyDT7DtnDyHphrQl8FbaXLIgPW6l5IyITZOcVys89ViMwqAV8+P/qdoqjELTcOYpM0x6Jir3OnO3iudPlZ1P4KmyXLYnP27JIikySWxgvxOhqb3augkU2cJ2lTZcsmCE/E5mkdvehWCWyafeor8IWlY/znaUGsoUdwgO7yUx5Wtj2dAVz1EWfr12mFSwSmWL7ewedK/PuUV+FLdQscOaJrKDt1uaPiLyEYWQ1bxvtCGa1fLfz/4VCRdOG6Ihg+7sH5GcT71Hfhe1yrQKyW2tpE3ktr7hcfsbGK0Er4ENRIj6AhVFkIhwWrw6MN/Ee9V3YLp5bKavMoIvztmSQnPwS5ypxyDUIoto8tN7gJTLF1h37nCsz71HfhS2C9gqnUAp70XIomUyRVxSftw3aqEtXW/zIMr1QkcgUb+854lyhZ2vehiu+C1u49qpFzpUQna3xo5SIvIT1tjnOEqAgFfBhSBxLmuDDS+dxi0YyDiqQUYkMWFuLrURN48uwxeYW6hseyywwR0ZkgtgSICtogzKUHGk97VwlNnSJTLGzrkG0ReyagiuWnC8/m8aXYQtrV11kX1gPNX2Ii8hLuYXxFnUgwtb6/upstfeaRQNX7eJGZJLaPXavFkxdA+7bsF236mLnymp5t8Rb3kReys0vxIny8loWFfl8KFkGrfP/cN3yxfIzkWle3rZLfkZNj6k1Bb4NW7Sy1V8qCqVwGgmR56ygLSiZIC8xveH33aQ6tSHk2GgSkUGwtlYt+Vm9bEFstYppfBu2sG5l/JtffygQeamgdJJzZd2XbfHjvvxGL4zCukUTi06IXtzye+cKNQXmjr74OmzRs1WFUp1tTSKoG8CTv+TkF8erkiOtsc0g/EYvjFq7colzRWSOxpYO8Wtn1yhUIZt8xrKvwxbDBbF5JFnIwd4tmSG/JH60V6cfawqSCqOw5IfINC9t2xk7eGCd4dMcvg5b0OeRWJVMppBDyU6hlF5k5BeRllOxP/PqZQuNnQejcHvZClvA/flR6z41me/DFvNIqtWN+aVoRzCPNyN/ycjMSiiU8tUyICtkO5pOOC+4tpbMVLv7sKhvsDtY2FVQTSmayvdhC7etvdK5EqL15CHf9SIomPRCqS5nSNYP0KtVG8WgITvPOdaSyCR6YdRarVjWVIEIWzwMVO8WDwk5BEbkMb1QqqvDJzudab1aDM3de+tqeU1kEhRGveEcPKAvAzVZIMIW9N6tfFiwd0sG0AulItrQrKn0Xu31qy42fmiOwmnz9t2xwii/rP8OTNiyd0sm0gulOrQgMxH+bG2Nx+Q1erWfXnOZvCYyCUL26U2/kde4T/XdBE0WmLCFu29eGauaZO+WTIBCqaKyyfYL6340uXcr/2zO9wx7tWSqF15/Rw4jg5/u00CFLSqT8ZcP7N2SKQonTDW+d4s/E/5swF4tmao9Ek3o1frpPg1U2AL+8tm7JZPgnFvTe7fs1ZIffP+FN33Zq4XAhS3+8vXebXvTcXlN5KXiiukydMG03i22k1S9Wnz/3LYuXmxIZAqE7POvvyOv/Tj6EriwheTeLTZUJ/JURqYownAyGNa7bT1RH+vVbrC+d4oL8+Q1kUl+vOk3sQpkP46+BDJsE1rn1kOk9dRh+5
rIQ4Vlk43r3WIrye7ONnmNjdz9UtlJ4eL3Xi0EMmzhljWXi8Vzp8trnHUbaTZvnoxCJql324bdzjwkp1lOxxui993+8diIEJFJ9F7tnTcu912vFgIbtqA/PLB+0K9HnVFw6L1b7Crl5RRH2+kjot85lhI9WtU4JTIJDoZXvVqErF9HXwIdthgWSxhO9rgnQZTQu7Ug8LyAE7LUKVlYMnfPzSvlNZFpHvnBa7FeLWoK/Dr6EuiwhZuuWSrmOxup4yBvdUYnkVfQu40dLt/ZJjqdSuB0GbB6s3rIf/nW1Rw+JiPhvNrt2uHwfq4pCHzY4iFy32fjw8ntjUeNWnZBIWT1bkunVMc2ukj3Pdl2+nDs97tu+WJfbOJO4YOiqO9u3Oq8QqPwal83CgMftoB9kzH8AHjIeDV0R6RkWz3bonJ7xAXzpukqlurubI+N7mD+656bV8lrItM8/syW2AYW6NHWLJwpr/0qFGELqE7GMARgrorDyeS14omVIisnX16jWErNoboFw8etJw46r+y9xLmmlky0dcc+8dr23fLabhT6v6YgNGErh5Nv/7jzSojWU4e42QV5KyNTlE2tdl441cEuDic3H38/VpG/etlC+UFkGux/jKIoJShL0kITtoClDZ+7cYX9YqBfNDe8L1v7RF7BAfNF5XZ1MoIW87duwK+LAkFA9TGKoohM9OTGrbHh4yDVFIQqbAHDyercW7Tymxv2y+Al8gr2Tc7Ktodz5Y5OTiiOl2hHs+hw9ghHD+Gxe2/g8DEZqXb3YXmEHmD4ONY5CoDQhS08cOea2Pwtll60OwdmE3kC1clTZzkvhFwPPl4jLn09XaL5uL10Av63du8TmQRraR986hfOK7umwI87RQ0llGGrWvclTuu+o/kEC6bIU7kFxaLAOYYPIy56IdNYIbCbjtbFRm4wqrPCGdUhMs39T26Su0UBho6DVlMQyrAFzFs9cOefOK/sginsoUzkldJJM2LDyahOPtf5W70gClMnQRqSo2D50aa3xBs79slr9Gb1YtagCG3YAlpPyQVT6dxcgChBRqaYcN7c2GYXmGfFfOtY6AVRGDYO4sOLggE7RKnNKzDq+Hd3rw3U8LES6rAFvWAKQYvAZcEUeQXrbsunxasvMd+KedfR0AuiMFXy8D3rWBBFRsKw8f1P/tx5JeR62qAeiBH6sAW9YApDyU2sUCYP5RWVi6IJ0+wX1n2IedeRFkyhN6sXRKFHy4IoMhEKor7w9z8VbRF7quOmaz4Y6POUGbYWDF2g9a8KpvDAYuCSl4orzhO5haXyOrZE7SyS71tMkbAgikyFgqj6BnvXtJqFVdb9ulxeBxXD1oHW/xNfvYmBS8Yon3Z+rGBKLlEbZk/v5PsVR0tiioTIRHpBlF2suiYQu0QNh2GrwYEFyYHbcrJeXhOlW0ZmVmLBVPOJlPsnpwraz677kLwmMk1yQdTDAS2ISsawTZIcuHi4tZ489zWPRGORXDCFxh/CVWHQkp/sqz+ZUBB13+0fk8/cMGDYppAcuNjwgoFLXhlUMGWFK0I2OWhRXMKgJVMhaO966NmEgqgwHYbBsB0CAvfRL90Qm0dg4JKXUDBVUFphv5CBWyeajsV3h8KG7TxcgEyVHLS4X7EdY5gwbIeB9V6YTxgUuM4DjiidSqfM0gJ3AP+Rl3hwcdMKMlWqoA3j/cqwPQvsMpUcuGeO7rPylkfzUfpl5xY4V7ac7Eyx/uqlzisiszBo4xi2I4DAfexL8YMLsPFF46Hdo97Zh2jMBvrlqAoOmNf19PbLhxkeakQmSQ5a7NQX5hEYhu0I1SycKX744Gdiu/Fgo4Ezh/ckVIYSuUGe3nOsLuFkKszPquISPMwYuGSS5KBFhwVracOMYTsKWHz9vb/5tLxxoF8+BPeJzpZT8jXReMPoCUZRsKkFYHQFlfKoPMayCXUv4qH2hW/9lIFLnksVtPpUXFgxbEcJG7pjSFnfwxPH8+GDaDxh1ASjJ+qYPDT2nr
IaexhlAbUhgArcxpYO8dkHnhFbnZ15iNIN9x7uQQbtYAzbMcIwnr7UAr3bpqN7WThF40LeT8f2ydETwN6xP9KmMRQVuOrkKmzu/pXHX5Tb4RGl03Ov/E7ee7gHgUGbiGF7DtC7TdjesbNNDvnxEHoaKzTWUAilj5TgPnvsS3825DF5eJjhDFBsEqBgO7wHn/pF7MFH5BbcY7jXvvPMFucddc/G9ykghu05w5AehvYSCqeO/FEe3o0KUqKR6u5sF6frdyUUQuF8T4ygjOShhU0CUO2pvvalbTvFXQ89J9qdIT2i8YZ7C/cY7jUledSPbAzbcYCgReEU1pApOLwb63G5PIjOymqUoXHWdPSPor+vR76Fjdmfuv9msV7rrY4E7sEnvro+Ntqys+6YuOW+H7JwisYdjsfDvYV7DPTiPRqMYTtOMMSHXgXK2/X1uKcP7RaR5hPyNVEyNMZwj6BxpiAwn334NrmD2Vjg52GZ2nxng/eG0y2yOhSnrRCNB9xLt//tj+W9Behw6MV7NBjDdpxh7eO/Wg9KVSGKXgs2IpDFLk6vhQgQsAhaNfqBRhoaa2i0DTU/O1KoXEYvQxVOqaVBLJyic4V7CPeSqjhG8R5G9pKL9yhR1oyLVn8dF5WTyxKGQWnsCvNzxceu/IB8eNbuOSL6+vutB2pUdLWeFtm5+fKDwguNruaG/aLTuh+s1ph8D42z//OVG8fcm00Fc7cfXbZAdES6xc799lDf73bVi7feOygumj9DlJckbv1INBz0Yr/4rX8XL2vzsxgy/ubn/5SFUClsra0T+w7Z0zfrr1nKsHUTHpyrLrtA7Dl4Qpw80yYGBgZEV3uT6It2ipyCIpGZyRs0VAb65QHwrccPiF6nN4uH1B03XCW+cuvVspHmBgQ5eh0I2u6ePnkvvvLmH8TEsqLYUDPRcFAA9VePvyiOnGiWr3Hf/uUt/41HOg6DYZtm6D386YeXiB7rIff7vUfle3jQRlqsXo318M3JLxIZGRzND7poR7NoPrbf+twkG12AoENvVg31umnOjMlihfX7qIYfQhcPg137G8RVF89hz4RSQrXx/U9uEk9vekveM4BOxGP33hCfKqOUGLYeuXRRtVh24Wxx4GijfNhh+BAFVFjmkZmZKUOXgqc3GhEtJw7I+Vm14QmC7dZPLBNfu/3jsuo4XVTDD7//u1bDD9Mb6KlgWPD8GZPEjKkTnK8ksoug/uLhjWL3gXjx3i1rLhcP3PknnIIYAYath6ZMLJEPOwzp4WHXGe2xejn9IhppEV1tTXIuNyvn3ApjyAyYl21rPCpaT9aLvt5u51270vhbX7xe9iazsrwZ0cB8LRp/uAdb2jvlfYhh5TMtEfm+V38uMgM2qfiHZ98Q3356s7w3AM8s3LfMiJFj2BoAQ3o3rL5EPtQwjIcexkB/rxW4Z6zebofItXq5mVkc1vMlbV5W30kMFZvYug4FJW7NzY6GbPh9ZIno7x+ITW/ssXowW367V/Zw2csNp9rdh8WXHn1e/Nc7+5137AbiNz//Cd4To8SwNQSCFg9gPPBONbWL94+gMhXrLqMiglOErADOys1jEZWPYEqgxQpZfV4WS3AwXHznjStERblZUwW4B9GTXTxnuiyeQi8GPV30ctEIXDh7GocLQwKVxt946pfiyY1b5T0AWE3x8N3rxKeuvZRz+mPAsDUMejkrL52fNJ+LDTE6RKT5pAzfrOwc68P73hANhnlY/Du1HH9fdLWfic3L4kF127oPia//+XXi/BmT5XumQo8Fjb4zLR2xhwPmcn+6+W2BZUMoiOHDNphQAPX9F94UX//Hl2MNfkDR3qP33sBq9XPAsDVUqvlc6O3ulD2m7kibrFrOzmNPwwTYA7u98ZhoOXFQHoWHuXcFQ8Xf+PwnxBVLZvtm/hNhimrl5EYf1ue++Po71o9njesaYPLeC9a/61//35+J7e8dkFNZgHBFAdSGNZcbMd3hZ8lhm7FswyNyvAtDmthTlc
yAdW0bX90h9ibtaZuVnScKyyeLgtJJIiMzy3mX0gWHBXQ2nxBdHfZ6QwU9WYTsjdY3VTorjN3y2vbd4vFntsgzchU0BO+5eRWXfPgc5mUf/8nrCc8W3LO3rbuS+xqPI5yEpA5oeP7ROxi2psM3BkL3jaQDwTOtoEXgFljBiwAmd2F0obPlpOiJRpx3bJiT3bDmMjkqFLShVlSlYmu+pzf9Rl4rCNs7P7lCzOMQo69gXhYNKP1Zgnv2JquBiJ7suW4RSokYtj6FbxQ89PCPl3xGaU5eocgvmSjyissZvOMIG1HIj/bm2CHuCr5f1l9dI4degw69W5yPqx4cCkJ3/dVL2dM13FANdszL4lhGNBhp/DFsfQ4FDc++8jvx4uu/TxjiUxi852a4gAX0YBEwYezV4Zi+7/xki/Xwjh9sDxheRs9o9bIFLKQyCB70qaaiMC9796dWWc98ntDjJoZtQKB3+9r2PeLlX+0a9PBTGLwjc7aARZhgQ3/MZwVhPvZcbbV6SM+9WjvovsPfzdpVF/HvyUNojD//+jti4ys7BjXGUeC24brLQjEaYwKGbQDhmwoPwM1v/XHY4M0rniByC4pFTn6x8244YXcnVHd3d7YOGbAYWsMwG8IDYUuDoaf7nNVzSh5eRu8WvVz0dvl3lx7DTTPh2M8br65hNXmaMWwDDsG7eftu2eNNHj6KyciUu1TlWMGbW1AqAzjIsEynO9IueroQsB2x82OTqYC99qpFLP4ZBdxz6ElhKYk641TBc+Wjl18ge1Ps7Y4v9GIxD7v5rT1yH2MdquOvW36huPGaGs7JeoRhGyJo7aLHO2zwOnILSgITvtj8H5uCdHe1yx7scIf2IwAQBAhYtvzPDR7+L217Tzz3Sq2895IxeM+dCthttXWDCp4AwbreClgELauLvcWwDan6hjNiZ90x8faew7I6MdXDUIfwxfKirLwCOd+LAxJyrGuT1vZizSv2lMZynP7ebqvH2i2DNtWwsIIWP+71SxbMEDULqtiDdQnW6W58tVbec6kweEfubAEL+Ptcu3KJHDImMzBsScLQH+Z3395zROzaf+ysPV9dqiBWxqtXbIemvavNaANVx3D11khGV1TwYgkRhzxtIwlYVBWvXXkRGyyGYthSSvjmRvhiA/qd+LB6JMmFFmOF4qzhesTdnfbWgOMBD2s8hBiu5kHwygp66wGEkZZU8O+HJSmXLJgpP4clfEfa+MW9jSkPBCwbJmZj2NKIIWx31jVYD4J2+XA81NAkHwr76k8MKoRJJ1S4VpQXi8VzKkWR1XPFqTXowTJY/QP3E4aaN1vhO1TwggrfRda/NZ5RQaluHs20Dv6fr12+WFZ4M2D9g2FL4wYPifZIl9jrbLat1B06NSiMG5vtwNZhiUiqoqT5VVOsEI1vgi6DtGqqqJxcyodNAOG+wFDpW+8dPOuICoZL8ayqqpwgG1koAjK9sE3/PkGDFT1YNFqHgu8L1btHdTyXT/kTw5aIjIbArbV6fH/Y3yCDaSSjKAjh6ukVMpgmlhWmPYj1QD1+utXqqbaOeAQIjclF1p9TDp1bH6yKDwaGLRH5CjbPwFF/mM88W69wKBgRwTGiOoTc3KqRnTWMWobunsSivPpjjWP6s7BoLxwYtkTkaxh2bmzGQfd2zxEhLHuVo6ioTwe9tiAnJ0sGK2sLwoNhS0SBhZ5m/bEzViDbvU69RzqegZzcU0YvVdUWVJQXcZ6VGLZERDrMtw5n8dxKnmZEo5YctpnyiogopFD5O9wHg5bGA8OWiIjIZQxbIiIilzFsiYiIXMawJSIichnDloiIyGUMWyIiIpcxbImIiFzGsCUiInIZw5aIiMhlDFsiIiKXMWyJiIhcxrAlIiJyGcOWiIjIZQxbIiIilzFsiYiIXMawJSIichnDloiIyGUMWyIiIpcxbImIiFzGsCUiInIZw5aIiMhlDFsiIiKXMWyJiIhcxrAlIiJyGcOWiIjIZQxbIiIilzFsiYiIXM
awJSIichnDloiIyGUMWyIiIpcxbImIiFzGsCUiInIZw5aIiMhlDFsiIiKXMWyJiIhcxrAlIiJyGcOWiIjIZQxbIiIilzFsiYiIXMawJSIichnDloiIyGUMWyIiIpcxbImIiFzGsCUiInIZw5aIiMhlDFsiIiKXMWyJiIhcxrAlIiJyGcOWiIjIZQxbIiIilzFsiYiIXJaxbMMjA7ionFQmrl2+SL5JREREY7ettk7srT8pr59/9I542BIREdH4Q9hmrl620HlJRERE4wmjxhVlRSJjwLLP6uq2RaLODxEREdF4qFk40/qvEP8f6XQ2yiwPu08AAAAASUVORK5CYII=" width="320" /></div><div style="border-image: none;">What percentage of the transactions that have milk have bread also?</div><div style="border-image: none;"><img height="222" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAWQAAAD3CAYAAAAqni55AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAEpbSURBVHhe7b1Xc1tZlra5SJAg6Al6K1GUobxJbytdedP99cRE9A+YiZi+mf/wxfyAmZiOmIuJ6Mu5nIju6S7vKyu9UaZSXqIkit6DBAnC0M16FojKrKo0UqYIHByuR4GgSIAkeM7e715ur13x4//1/9pZXc9KdVVEKioqxHEcx9l7Nje3RFRyv/fsSXn16WHpam2Sipf+p/9959jBTjl5qEdqY9W7L3Ucx3H2krsTC/LxzQl5/OQBefWpYTnc3y4V3/5f/s+dH75wWv7x5bMSb6rbfanjOI6zl/zxg1vy//z8fTl2oFMF+ZgcGeiQiu/8y7/u/NOr5+Wfv/u4tDa7IDuO4xSDX799Xf7t399Uy7jjL4Jcufuc4ziOU2JckB3HcQKCC7LjOE5AcEF2HMcJCC7IjuM4AcEF2XEcJyC4IDuO4wQEF2THcZyA4ILsOI4TEFyQHcdxAoILsuM4TkBwQXYcxwkILsiO4zgBwQXZcRwnILggO47jBAQXZMdxnIDgguw4jhMQXJAdx3ECgguy4zhOQHBBdhzHCQguyI7jOAHBBdlxHCcguCA7juMEBBdkx3GcgOCC7DiOExBckB3HcQKCC7LjOE5AcEF2HMcJCC7IjuM4AcEF2XEcJyC4IDuO4wQEF2THcZyA4IIcErZ3dmR7++8f+mXHccqEiu/8y7/u/NOr5+Wfv/u4tDbX7X7ZCQqbW9uS29jUx5Z95POt7W3Z3Nw2EUZxN/Rr6cyGbGxu7X5Xnmh1RGprqqWqKmKfV1ZUSHVVpVRWVkpVpFKfr5KaaJVE9fmIfu44TvH49dvX5d/+/U053N8hrz51TI4MdLggBwWs2awK7tp61h7rmZx9TK5nZDWV0f9nJLmWlUxOhVfFOZ3dUHHeMgsYkebriPRnQXxjUQQ5L7bVKry1NVH7GFMhbmyISVN9TBrqauxjY11MamPV0lBbI/X6NcQaEXcc59HjghwQtlR8EdNsblNFNi2zS6syPb8ii8spSayu69cQ4KysqggjvFi+vH4DwVUBxrKtrKwwEY9EKswK5mufB9+L9YxoR/R7EHCEf0f/U60WNOKMtVwXi5oo16sYN6lQtzTWSXu8XrrbmuzBc/wOXouF7TrtON8MF+QSgfghgqsptXhVgBdWUjKzkJTphRWZ1Y8I8uLymoknApl/VJlINtbvWq/6yAtmVGL69RoVRzSRUMOXCXJuc9MEeUsF2T7X94H1va5fQ/B5P0nEX61wvo71nV8Atk18W5tVlNubpKu1Ubp2xbmtpV6aG2rtPWFpV7g67wncM8YN4SrCVIwjvCDuDfD5hi7UvI6FltuQX2DzISo+5/+M
EV2L7WM+RJVf0J3S4oJcZLBgl9XiHZ9LyPh0QiZml02E51SAF9QaXk9ndYJUm8B1xhulI95g96AgdjwIJzSoCGO58mBCPQqY5Cn9/Wv6SKXz4RFEmceKWuiJZErmE2u773VNMtlNXSDy75X32d3eLAPdcRnoyj/i+r4jajk7Dw/CiifEPUjthqpSu/cjSbhK7xELJZ4VoSk8K9VfHV/bel92P9efwcJYq4t1TBdoINyUX6xVlFWAGTv5hb3WFnZCVA264Nfr9zC2WNR9bS0eLsh7jFksal0icFjAo1OLcndyXu5NLqqwremky5lotTTWmrXZg+WpHzvV+uxoaZTGhhqbQIh0fhIVV+BYQBDq7MaGCQSCgCjPLqoVv7hqiwmhleXVtGztbFs8uqOlQQ71tclhHUwHe1pNqBt1EcFScyvsryEJu6njg2uM6LLYzS4l7XquqKfCQpjPFeRFGKFlPGEB870V+o97hLWMe4SXhBdjyd1dCCfxO/he4PkK7oO+hOGExUyoitwC94nFH1FGpPl/m2pAh47Hdl1462LcRxLAfi/3AhfkPQIXEtd/RoVrfCahIrwod8bnZWp+2SYUgx23H0uyv6vFRJgQQLypTqJqtVTpTGHAB831L1hhBdd5aWXdFhr+zsm5Zbk/vWRijXBTydGjYnxkoN3Eub8zLj0dTRaLRhT2I1y/Db1uiC+WLsJbCFVxDVncWPAIFWHNsghXq8gitCRXCVEVrFms11iNfl0X7MqKfAwfixdrGBEu8NdWM/mKbVtc+TyfNN61uvU9pfX3ZnVxyIepNm38YTW3qwdUMBYYp51tOlb1PhY8NhZb55vjgvyIwVpZXEnJyNic3Lg3K7fGZk2Q19RCxvpAfA/1tctQf7v+Py7dOrAZ1OXu2jPJseim55MyMZvY9QQWZWxmSdZSWROSvs4WOXawU04e7rGB1tZcv29K6yxUpddnci4fpmLxQogXVHznl9dkRYWZBbi1qV6t0QZbmFm4LFSl46Zp9yNCjADiibDg1ahgI8IPs3DzXhBiRJd8ArkDQlU8CjmNJB9TaVswlnQ8s0iQXOa38J4QaEJqGBX9el8Zy70dzZb89fzB18cF+RGAe4gFQlx1ZGxeLo9Mya37c1YhgYXDQD2kAjykQowgE28tVCiEDSwwJvmquthzidVd72BB7k4sqHewYgkoXF8G2pljfTI82CXtzQ0mKmFygQlV5QhVqdVJgpYQ1Z2JebmvCxW5AixRrGXEtau16S9JUj52qNAxPrB0sYwJJxQrXIXnY/Xteg+xoDP6+GzVz8yiPnQhIdy2ns1b8cSbCbFhaBTCVIxxYtCEN1ygHxwX5G8AlgZVCQjNrdFZuXJn2oQ4lc5IW0uDXsxOOX2kR47qRe1UV48KCRIp+2WAIkoknbC8sJwRpOt39RrpooVliGdw4lCXHD/UrZZzly1W1DuXszATjsJTIL6Op3BHFyIWI6xiQgBYl7j9eAsWAtBHj37eruOFkASiG7RQFfeRsU71BgvMnN67QpgKkebvRLDX1QusUwuexC7GB6JiBoiKdTNe4D4NUz0MLshfEyxiLL9LNyflqooM8WG+xgA8dbjXRIaLieWAdePkqzjmlpJyWwX5hi5g13QBm1AXHivwyIFOOXu0T86p1YyFVagKKBewLPGQbo3N6aIzY2ODsMTKasas4IHuFptkCFVvZ7OOiyYV51jZxl7xhEj0JpL5HML4bpgKTwCBxromJNXXFde/u11ODvXoPe6QeGPtX8W3nb/GBfkhwMVEdLEKPrk1Ke9dHZVRHYBYMwd64jboTqgQH1BBoWqCumH31v4e6p6JTZIAxGK+pgLGhOZSDfa2yZOnD6ow91t988PGR4sJViNVMrjvt+7PWqhqRBfmZRUp3jfVJQgwFSckb4kNE4pAhMPkJRGyy+q8IDGIZ3B/elG9AnIICzKv14b8An/7sHpBp8kf6OJLDJqFOExhqkeBC/IDgis6r27ZlTtT8t6V
+3JVrTtcuaM6uM4N98uZI72W2LCMtw+yB6IgaFiSn9yeko9ujFsyFNeW6/n4yYPqbfSoi99oZVZBgTAMVSTjM0tyUy396/dmLFRFZQSLCHHx4UNdFlOlyoRabWKt+yFUVQhTEXcen11Wi3nBrhFeEcluvALCVCeHem3u9Km3gAexH67Ng/B5ghw5/MQP/juW3ml1vYlr7WewiokJXrk9Kb9796b85p0bFgslBvja08PynWdOqHAcsHhgvojeB9aDwrXimhFvJ0yBFUllAdUIV+/MmDhjSSPQTFosqlJfX4QYAX7jozvyu/duyhsf3zGPifHw/PnD8sqTx+TFC0csd0ByzqxAfc/7ZVzwd2KQsBmFxYlFiYQ2YRq8A+qqr6lXdPPerEzqdSuUR+YTgB7aI+dwUQ0Tqm3wrNgV64KsECOjVhNR+K2K8K/eumauNcLwwoXD8v3nT8kzZw7ZQKOg3oX4m5EX5noV5jbzNIghU6VCKICkGMLMa5jUCHQxrzcuORt7CEf8/r1bNhY+vjlu1v3hgU55+cmj8u2nT8jTpwetnJHEncdJ81BvzvXo62ixCgxCUogMZXdU4GA9E3sm8cuWe8r5in1/g4QL8ufABKQK4P2ro/Kz16/Im2oFbW7tyPkT/fJDFeKXnxy22CDXBuvHeTRwLRFiLCsmL+WCeChM2Csj07aLjQlOLw/b/FCEa09yamx6Sd66dFd+8eY1efuTu1bORijl1aeOy/eePSFPnDpogpOvonEh/jy4b1jB3NOjBzvNJWcBpqzu9m4ilDJJQoMxFeVCxcl+wwX5b2BAIAC/V3eUCcj/iXV9//mTNgFPDnXbziW3iPcOri3i1qOT90B3q01c4rNYU/f0frCRobCtd6/i9cS3qRe+eH1MLeLr8of3b8nc4qpVz3z/uZPyylPDVhFCcor4tg+HB4N7i6dDnXIhTEXPExKC1+7MyG31QqjcwErO1+qHKwH6Vbgg78IEZBvwB9fG5Gd/vqKW0D2zwIgL/uhbZ+SJkwelc3fyOcUBC4kkEJN2QIWZSTqu1iolc1hT3ItH3aMZ7whxuHRzQn6pCzILM7sNsexeVRH+norxhRMDJige8/xmFMJUgz1UobTaIowYE6a6M75gzawK+YOwVaZ8ES7ICmVYozrR//DBTRXjy3pR5mWwt93ixCTthgbaA11+FWYKFhW72CgnZDNJPjE0Y/eJe9ekHgtf/6bWMt4RNbRvXByR/3r9inx8a8Lc7OfOD8mPXjxjH3s7W+w0FR8LjwYLU0WrrUSQOuU+XfiofyRMdHlkUqbmVuxaYy3vhwqmfS/IJO5wlX7+xhX504e3LdlAsu7HL56WJ04dsAtTjFil89XUx/IxSB54NCTZbt+fsy5o9HrAmv66Hgzj4JPbk/KLN69aeIKWl0+eOijfe/6kvPjYEZsclrzdfb3zaGGKkRdgFyNhKjwQmhzdHp+zpC4NkLCUWxrqzGoOK/tWkKmXxDV97+p9+c8/fiKXb0/ZRP+BWsXffvq4Fa/jQrkWBwfuBZ4KcVt2RGI14eISwmAbLyVmlM09zC4/dg9SfvXWx3csVHVlZMrK8L77zHHzkE4MdXvVRBEhTMV9pdIGj4hFcGpu2ZJ+3GOMI8JUfD2M1vK+FOR8wmZN3r50V13Ty9aV7OThbvnHl8/JCxeO2HbnMK/C5Q6TFuE92NtmllRiZd027IzNsA07Yn0hEOWvCiuQKKT/CLHi37x9XTg44LETB+RH6h0xDmj0k0/Y+apcTLjexIy5t1jLLbogYiFfvzdrZahUZhBKIkwVtjau+06QEWN6EuOW/vKta9Za8MlTg/Ljb522Xgo0R3HKAyxituTSPY6QA4kgalsr9F9LU511Ifs8KwrvaEkt6/fVO/qpLshUUmCV4RlRTcPYxz12Sg91ySyM1PtjJJHrYVMJrUC5v2FrUbCvBBn39O7UgllE7LKiFeRLjx+VH6kY06f3YVxdJxhgwZKp7+1oMWuJsjjK
4+g8hijTn/ez9az0VZhQF/j3796UX7xx1Wqbz+hC/EMdA1jF7Lij3aUTDBBakrp4PfmdnLWW1L15f9YSsNzbuN5ndCoMnsy+EWTE+PbYrMUJ3/hoxDK237E44UlzfcPm+uwnsJzoudLX9akoczgAIQnizcSAmbhUUdDQ6Fe6INMzgE0fLz1xVP7hpbNWU4yV7OGJYIKnw/2h/LG7vVEX3A3bwk6vDO4ZizJJ33K/f/tCkDlOidrGn75+Rd69PGolVD/+1hmzjulH61UU5Q8TkTAD95aTNbB8reGPWsoMas4k5PAAqije/PhufkF+9oSVNQ72tnpNcZnARhGsZaoxdrZ3rN0nXeXwdmntyRj4rEdUboRekLGKmJj/+afLFjNk99c/vETy7rD9sS7G4cJ2+LU3mxvL2X505aM5FAL9xw9v22CnERQL8stPHLPXlvME3o/gEXF/C31k6CqHR0S/ExJ9hDVoWFSOhFqQEWMSPWz2eO/KqE2+n7x0Rp49N2RurLun4YN7ijWMBYU4M1kLPZdppE4PBcbA8+eHbLD7GChP8iGMWov5k4inNI7FlzJIDl+N670tR6/n8wQ5FOYCYQr+ODZ8vH91zG7cP7x8Vp46PWi9KJxwQ0iCXrKHetusAoPG6XztyVMH5MLwgFlSTnnDWkrsmHan/8Nr560PNfXKeMMcIEEOIQyUvSBvYhlPzJtl/M7le3ay809eOivPnj1k9atuFIUf+uyyyYNsPOVxxB05uJOkHsdIceSSEw4IUbC79n/89gU5daTHkn3/8YdL8tHNiVCIclkLMnXGlDX95u0b8p4l8PLxwqfPqGVsWfTdFzqhZW09a02i8I6wjA929sqBtkHZ2ayxpC6HDHAwLY2EnHBA2OLssX75ic51TiSh9PGnaimzAxcPqZwpW0Fmes0nVuX1iyPyzid3pT3eKD9+6bSHKfYRWMbvXxm1yTizoGLc0SuxrX5ZmeqQ6EaPbG9WqyjfswQfR3KxScQJB1RgnBjqMQOMnbf0WS5shy9nUS5bQWYyvqMWELvwyMR+++lhc2Wa1aVxyzj8cOoEydv/+OMla5nZ19ot1bk+uXerSibHdWBn22Wo85Cu3JXWVpMWq1RgOOGBPAHnMVJbTmECx0UVetVwQHE5UpaCnMlt6EWftImGy0of42fOqhhTTbH7Gie8YAFdujUpP3/jqkzMLktfW5dU5Xrk3khEFhY3ZXtbZH2tSrbXW6WzqdsO4fzNO9dtolKN44QHzvMjfEFPEioV6OJH+AqLeXOr/O512Qkyu/DY307ceHphxdomsumjM964+wonzNAT+ca9Gfnlm1fl/nRCeuO7lrGK8aKKcYHk6qbcvrUlK7NxaYp2yMTMivz23RsWb3RRDhecz3fmaK+FL+izfO3OtOUORqeWyi6hW1aCTGKGZtZc7Kt3puTYgU7bfcVpt96xLfwQAqZygq3QxAobos1qCvfI3dtVsrT09yKbSm3LxP2I7Kx3SVtDm2XkOaKJn+GVF+GCjnCcCE+ijwZFJHTxoGnjWU65g7JSseXkuh1A+d7VUetb8OrTw5ZlJcDvhJ/FlTX588UR24W5vVEjm6sdMna3yizjL5pz2Yxq9kqd9DQPSEt9vbx75Z689fFd21TghIsGFWVaqr7y5DHTBMYKwry6nt19RfApG0Embsiuljc/HrFaUy46Rf/etS38ILYI6Bsf3bFTwWPRqPS1HJDUUoOsrHy1pbuyvCPzk3XSVN2rAz4ir1+8bUc2Ef5wwgVNicgnsVV+Y3PbrOQPr4+VTY1yWQgyO/HuTszbRFpcWbfStmfPDlmRuBN+1jNZ+fDamPz6rWuSXMuptdsvkmmTTPrBPKNsdltGRnKyMNkgnY09Ju6vfzhihxV46CJ8sB8Bg43NYfQ1+ZWOG7Za0/Ev6JSFIHMi7XtX7luPgsG+NmsWRJzIexOEH6zj22Pz8pt3b9jxSw3VbZKYapbRe9uSTj+4mFJ5sTBXKZW5Nulo
brXkHkc5La6krKbdCQ/0vhjojstrhDSHuuWW3us/vH9TxmcTgY8nB16QWdUu3Zy0uB/dnl44f1iOHezyFor7ACbPzOKKxQLp4ldT0Szri20yPlZpCbuHJZ3WnzdZLZFcu/7wiNUmf3xzXNIh6YPgfAr6cNSS/sdNnAlboCF0iQsygRZktkaPTi1aqIJdeU+dOmhlbvV+9NK+gGQM/Uk+uHZf6qO10lLdK2uJOslmvp6Vs7W1I7MzW5JNNktXc5eNqT99OCKjk4tfmBR0yhcOyaVG+ZUnhiVaVWUFATQiCnLoItCCzFla7Ma7qdbRUH+7Bes5lNQJP9QKU9yPVbO0nJGqrTZJrzTKRu6bhakQ3qWFSsmutEgs0mQ1zYTDOPTUCR80sX/sxICVxFGl89alOxa6wNgLIoEVZDLg9LalVwFbJL/1+FFrueehiv0B9aNUVYyMz0tHU5tENtpkempHcrlvPpGWEpsyPx2V1liPWk5Rs8LZ+VcOSR/n4eBQCg6qoG3nYG+bXB6ZshPoOfA4iARWkOeW1qyLF526OJjyafpUNHhVxX6AEiWqKuhVsbMVleqtTllbrpXNR6iX6VSlRLfj0tXSIbNLq/a7ym0TgfNgYMSR3KPFgkiFNSS7PDIZyLLHQAoyZW43R2fk4xsTJsKPHR+QztaG3WedMMO9p6qCeF86syEd9d2yOF0rs7OPtllMOrMts9MVEt1plfpYre3i4yzGrNcmhxJOlGHfwoXj/ZY74KxF+qAEbQEOpCBzPhrWMSVJZ4/1WajCz0ILP8wNrFSqKkbGFqSuKi6bqbikVvcmTLW4uCWT92qkMtditckf3Ri3ulUnnPR0NFnoglYLdIYjVBW0qovAqRxxPPoUcME4F++CWsecAOGEH8sb3JmWD6/dV3Wukopsm8xMVFu52l6Q29iRqckdySabpLoiZt3gaEzjO/jCSbS6ykIXz50bksqKSqve4bQhvLKgEChBZtphIV26NSGpdM6s42MHOz2Rt09YSKxZ+8Q5/Vi90ySZlQZJ73HxA1b5ZqZO6iKtVmZXyFs44YSThOh3cfRAh4xNJ2wHH73Vg0KgBJkCfSxj4nm9Hc1Wc9wZb/CG8/sAtjBz3xHkuppaqdpqlZVEcRbitWSlbKTUSpY628GXNwjKpyGN8+BQdTHQ1WKiTAOii9fHrZJni62cASBQgjyvlhGTYWU1Y5UVxI6r3DreF2AV0zxqZTUt8ViHbKcbJFekDXRZ1d7Ucq1ENptleTVrmwdmFrziIqwQujh1uEeOD3bLvckFq+hh3AWBwAjy1u6uvFujc9LeUm8XrLnBz8bbD1DZQOzWtkdH6iW31lQ067hAZr1K6qpapaGmTu7qJGUschiCE0662prk3HC/9VGmBp2xF4RGU4ERZFzEG+ouLiyvyfFD3XYcizcPCj9YoZz8cvHGmCRW0lJfFZdcqk4yX3N79Nclm92Ruupm6Yq3ydJySm6qYZAMUGzRebTUxqrN6Dt9pNfK4D68Pm7aU2qfKDCCTE0gdaAE3Y8Pdkm8sW73GSfMYB3jFZFcqY3GpDbSJmsrVdadrdgszIlEtpqkTq2mG6MzMjm3vPuMEzaIJfd1tlieir0O1+5MycjYvGyW+HivQAgypW63VYyJ27G9cbCvXaLRqt1nnTDDKTCEK5ZW1iWy3SSJuRpJrpZmUszO5WR+ulqqpUHG1UCgl4afvxdeaMmAlXxkoEOm1Eu7oRpU6oqLQAjy7FLSwhU0/Bg+2GUVFqxgTvjhoFIa/LQ210msskUW5nefKAHk8JIrESu5q9Thx/vCSHDCC83KTqooUzzA/Z6eXylpMrfkgowFwqGT96YWpbWpzuqOCVs44SeT3VArVD2jpVVpqWuRqFqmGxuljeJtb0akNtIk9TX1dmoxbqxbyeGl0Dd5sKfNErkjEwsl3ShSckFeS+fkrl4ENgXQYpNHBPPECT1swCC7XRWJyE6uQRYXSu+wsSswvVYjNZWNklzLWI3q
in50wgunDw2rIUhVDXkswmelouQzgIw2gkyvCk4CoeTNCT+WzBubk7GZhIpfvawuxSSxtGNhg1KChb44XymZ1Tqrg8ZqWkisek1yiGmsq5Fjg13S1dZoHtEdXYRL1S+5pIK8rYOc+DGWEt3cOGql0psI7Qto5sNZZ+zOjNfHZStb+1Bn5O0ly8vbsrNRK3U1MdvKT3vOIPU7cB4thC0OqPbQdGhJx+XNEib3SivIugpRWkT9X29ni3R4i819AQX4lDneUc+oqjIqNRVNkn3AE6SLRXtLk3S0NFsjc8ZoJvdo2386waIj3mhhC7ZTj4zPWdVFKbyikgoyq1D+JFiR/s64NNV5Mm8/kMltWiiAjHZ1RYMkl2IyvxCsDmvUJGdTNdbjAEFOrftBqGGmrjZqyb2u1iY1FlYslFaKnZolFWTcQQY7hdmUutVEq3efccIMJ4JgISPMufWYLMxX2gGkQWJyKicbmZi0NNZZSA0vzgkvlBGwUYTQRTKVlinVJcZpsSmZIP8lXJFY01Wp0QLq1VUeP94PLC7nwwBN9TGJVtTJciKYZWWZVLXk0lGrRUaUPa0XbjAM+7taVJwrbHyWYut8yRSQ3hVmJWU3bWVqa6733hX7BAb71Pyy1EXrpFYfQYQwWmKx0iz4dFYten3PfghquIlEKlWQ49LWUm9jlNNjih1HLpkgk2UnhlhZWWF1gE1+gOm+gNM4JuYSsraekXQqKksLwV2ECaHRFSymH6fnk5IoYX2qUxzQIsKneO54RlSCFZOSCfLKWtpc11hNtW1fZF+5E34W9J7jGdVUV0ssUi/p9UqzRoPIRq5CVleqJJOO6Fhds4cTbsgZDKiVnNvc0nGasIN2i0lJBJnMNbufCJ4TR6SPAZayE27QXWKxtNusqY5JXbReKiuCewBBKrVt1RaZ9Ygk1YBYWl0P7OLhPBowDPs7W2yzCIbDfJEPvS2JIG9sbMmyDm6ymC2NtRZMd8IP931GxZjTGXY2otaEfq8OMH1UcHw8ByVQh8z73tzyvhZhhk0iXe1NVpdMn2QqwYpJSQSZbbMcv53OblrfYxfk/QELMMd00ayntqZONnNVks0GewdcQ22NxJvqrCqIvEexXVinuOCptzbVS2dbo6R0vJLYK+YuzZIIMgXXCDITM95c593d9glU1rA1NbexLZXbUanYCX7eoL4WC7nWkjsIcirjh5+GHe45nSdZhJeSKdveXyxKYyHnNi2pVxWptCA6H53wg4W8srpuxzPR93hpKfju/9qaPlYrdMxuW5jNLeTww/FOVH1RhYvhWMwTyIuuhEQMV9czNrgb6/PuoBN+qOdcXc/aAK+OVMvOVrUlzYIOp4gszlfo+43YmGXseue3cEPlFxYy8WS8otVUiAWZNDXJER64gn6y9P6AOBxizO4nErnlshDncjtSU10jDbUx693N5OSEdCe80AqYMYqlzJgt5iJcEgs5nd2QdG7TMtj1sZr8E06oIV9AuCKlooY72FRfPolcxmmtPgi1sWuPsk0n3DBGG+tYhLNmRBTrnpfAQs7v1trQR020yh7O50MiicdXrc0YbJ99Hf/f3Nr9Xv3iSmpDFpNZFcX8oMrktmRxNStrmQ17vhhs6u+m9pzyMXbnzc8V6Rc/AqLV+XFKyRsJ6R23kEMP+yNaVJTJexC2yBap81sJLOQdE2QGNjEaHs7fg1iOTCbl6v1lWVzJfOEJBsn1DbkxviI3J1bUtdowd3pmKS1X7idkNsHKviM39LmP7y7pz9w0AZ5dzshHdxZlfD5VNFdsQ8WMRC6rRmKxQiYnyyc5Ro/cqI5TFhUsfRY6J9wQTqXgIKPzMLGaNs0qBsW3kBUste2dbbM8qrzD2+cytZiW312all9/OGWCu579+wGxsbVtz/3qw0l569q8zC2nTYCX1nIyNpcysebz6aV1GVPxNetOvy+5nn8+sZr7i1W917ApJLWetQWYsqJyAjGuVlFmUWFielIv/BCmaqirsdwH
VjKLcTEoWcgCiw9BrqxwQf48EmtZua+iOafW8fhCyqzavxWCpWRWRqaSsrSatU02KX1UVlRId0tMTh1skU79+KDwowshjr0gt7lpG4IQ5Bq97/kOtOXBxkaFbOYq7KQTFrVHYSHzE5gDLJg8gmJ1p9SLWtAxx3jaz7BBhKZSdKDM5fJ6VQxKELLAQlZLTf+DK1jhPSw+F+4/4treVCPp3JZauGsqap+u0qzco7NrktWvDfU0SkNttV1caie5xhxNWB15sGvLz8WC5ucRX94LsJARM3P/EeQyuu337+fk/rhaSVs7+ZDFI5icK+rFXLm/LO/emLfHR3eW7PojiKW0wCcX1y28Na+L/X4HrwhhtkU4tEk9pWBlYC25Hn8xDIbOllqpj1VbPJgQRIG19KZZzy31UelvrzMhBsTink7sSzrBsZy/CsIe92ZWLaY8p1b4XllqdM9CzBjkPMrptlst8sKWveeCMfFNuTu7Kr+/NG3iN76wLjfHV+T1y7Py/q0FS8KWimVdKCb0/aTSvgEG44ESuA317raKtEiWRJALDVoQZG9K/+U011dLX1udrOoEmVLrBcFEdKcTabOmuuO10qCCXYBhs6avpYoiu/Hlq3rOxHjNLLUaHXwHOxukNro3Sda8hZwPWRCTLTcKu0kfVchicSWri2xaelpr5enhdjk31GqLKgvj/blP23zyq7CY+WihDX189tfzeSHs8Xlvq/B9XxQW4SuFn7FXi3G5Ushxcc83dSEuBiURZENvvkvxV1OtQnCws16FrFJGdaKuqwhndIBMzKdsAhEnjjxgaOKzMAEReCw0vJRzQ3ET971qg0r8uCDIPMqNR204cO+4bx3NMRnsapATB5rlWH+TLaILu+ECvJeppXULIxCy+uD2oiVxU5lPq2kIdbx9nZDHoiVvC41wkNblVE6ujiXknRvz+po5uTK6LAldqAshEX4GiWDGAK+5NZnMl0Las04hvFbw7opBSQQZN4Cbnnf//PZ/FYhub2udzK9kbHJSU0w4orWhxmLMX0sq9LozOUngYGHH9WftpbPCvebBAI9a7fke/rI94LNeHbH9R4VqogkjDZfol9EQq5KmurzHgzhT9vi7j6flQxXjO9NJmU9mZD27ZYneD0YWLKnLPbw9uSrv3lywcANTKq2vISZ9Z3rVxs30Ylo+uLWQL39M5xN2hKgQeTwkxJyfdXdaF/19ntArkDceKv+S/yiGVpVEkKt1Ulr2Uieou0lfDSv1gY56M3tGplbNSiL5huVcV/P1NtZgCXe11EprY43MqOs8oxMS13WvQIy531j8DPS9FP9HTUNDRGp2C1a4F49KkLmH91U0Eck3rs7Jdb2v5Az62/VeK9wPwhrcc0JKjx9pk+P9zfbcNRXR5dSGnBpskWdPdMjpgy2yohbx9fFl2wXLW2yuj8rJgRZ55niHPDXcLg21VSbShLP43Tf0tVjZx/qa9DWdcqir0b6vWCVeQYfQWrQqvyGoWH2wiy7IDGX+UD6yFXUvRSAscK2YqM31NepWrsiHI4tSE41In07crxtiYEHE8j6prjJVHFfHlmVlfe/aDHKfbYeb/l7+lRMHD9bYI6LWEtb9owrrULWxoBYvNeGEJtgNltE5Qajhsz14O5vzZYxHVTi74rVmwbKI1uti3NlcK7X6ka8T/yfRSw6B/7NgD3Y3mMXdqI+muqgujDv2/ZRITqkYI9InD7RYYhhhHupu/NqLfNioVE+ee41CFctuLL6FrGPZrAz9Qz1k8eAwcUgALSZzNuGIO+LefhNIVDEBmeyEQXCPiSHuBfwuHtTyIjbldNvj8UqJt+j71wmKhf+IDGS1eskPNMj5w63y/MlOe2yoSH9sFTL5xTGqr2ltqjFRLVjmxJipS2eDz8RCvlxxcjFlr21vjNnrCjHmd67Py28/mpbfX5qxRbewXT6/63BbatQ4qtndnIV7Xq9jio8OMXws422778VqEVyCK1+hA6dK/8CIJXkK/RWc
v4aJ0duGVZyPJyIEAx31cqQ3L6AIacFS47VtTTGzlPhKY221fY4VzUsojSPWTMaY57Ge2vTzehV56pfPHoqbwFtsWh974bWwCNtuN12EKSMqJ4gf8uD6MXYfVYIP4cNLOaTX/nBP/r4SQhpXkSXZBngT3MO//ZWIbkytdaxexJocwNlDrXJBxR1rmBzDxZEliyFXRSr0NVVq+eYNoS/DDaRPKcSOrVRTF65iUHRBZjgUJid/LE2GnL9noL1eXj3XY24qMCG74zF55Vy3vHCqSydgfvsxE7O3rU7OqahSJUHm/pCK9WOH26RNJzefHx9olvNDrWZR83OIHfM8v4PvR8DP6WQ+oz+Dmme7SY+YfIKkygSZ+56vAygPMBx4YCU96qTeZ8GqzVtkFV9qkXXoYkp1RmHRZjHlgagzFog3k8jDeu5rr5PXzvfIy2d7ZLi/+S9ljSwGPKjY4QGEMth2v1ebg8qNvPGQ77mDZhWjRLf4FrL+TfnJmbeQ85PT+VsQXBI4iGeBQq0wlnLBrWSMYAHztYJbizV8oLPeJiyfI9Q8z/cDFhSfY40VxliL/j6sbiznvRAcErkM6oK4lROMURKSeHU2Mb/CynwQuMZ4hzR4ohzt8r2EvHdzweLHw7qAEp76IrCGsayxoqmQYJclP4cmUlRLIOyMD0SdhlPEmy2ssfBp1zLGBr+DRCAhktuTSbk+tmI/h/r0vViUy428wZgX5Cp9FIMShCzyR6Rw3DZHo9Bv1Ak/CJnVdDLIVdzKCZrLrKdz1oKTpjOUbX5T2tXCxZqdXyG0sCiX7i5ZGdqRniZ57EibhSIiKvwsmoSfiGMWIJ9wZrDFvn90Zs12972nD0SVjn7AQk6SjuQdpW3sxqytiejiXGcfWZzxnKjoYGcniWISjAMddXK4u8ETewqGAx4LRQiP4p4/CEUXZMz+fIPymDV+TqbysTIn3BS8okLIopxClZxuwzilSx0dwBDKbwreyPce65MXTnXKaRVXwkXPnGi38jSaQ+GkIJrElXk+thtqAOLAhJsoZ+P72MlJlQQhiZ7WOnt/eEFs9qFUjucIYz17otNK5KjaAET7af0ZvIaF4MSBFnn2eKc8OdyhXpaf5JM3HApN0IrjMhRfkPVBJ356jXJ6BE3LnfBDhzesZIvLWciiPBS5s6Na4q36Xiu2rEdufS1hnm8+OUnWYqFiDfNg6zThKEIJhZ9PMo6wApuCCiGqAogyoSrCWk8ea7cH+YZC5Q0/gTrkEwMt+ed6G024CVVZIyoF4Sa8dXowLo8dbbVad0JXCDbvY79DWS4WMvmur0qGPiqKLshAuIIz1ZicdOPnj3bCDdYx9x03cD1bPo1r4q2V0taxI7FYpcTViCBkESTQboQVC+7zJIPn7bldkf88Cq9xPgVtWs/mSw9ro9USCW/ZWz6eyCGCJEk4RJAYnRNuEOQG9YxYfOsbtqWzszya1DM2Ca0haM06ZoMmyM7esLaetQNtKS1sbIgVrSFWSQSZ5AgWcl2MU13X80f7OKEGQcbCJHTR1r4jPT0lGXoPDRMzoWMUK7QwZp3wg6GI9x7TBZjwKnHkYlAaC7lKBRn3rzZqYkzSxAk3lA3Fm+ukvq5Gcls52dBH0IlGK6Q6Sslb1jbZxNVCZmFxwg+GIgsxC3CzWsjFuu8lEWQC5I31NdKok3NN3QJP7IUfklL5pFh0t7om+Pe8vq5SOrsqpbZ+25J5VAc9ioSeE2yoAEKT8I6oqkGnQp3UAwZ3a3O9xejmllYlU0aJHufhoY4TSwNRTmWyktnIqAUa7LDFTsW2bFVkZEc2pa2lQdp0vDrhh8Rz/uj/TfPkqQorFiWbEcTjetqbbQfUxGzC48j7AKxM3P6qqm1pa6ekLLilVRjCsdodyWyuSzqbk77OFmmPN+w+64QZjMREMiXbW9umUw3qzReLkglyg07O3o5mi81Mza/Ikq5I3tck3FChQIKkpqZSGhq3pT7ABieC3N6h77khZ4nI
Hh2rlO054SetgkxSj0ZLLbu158WiZIJMTKa/Ky6danXMLiZlRh/FagLtlAbix60t+R7O84nVQHtFGAfZrXVZ31iT7vYmGehq8fjxPoD7Tn5jYXnNjEVCVSzIxaKkQbyejiZzBZdWUjI5u2w7Y5zwEotWS3dbk1maq+mU7FRmrZIhiLS0RCSzmZKl5XUzHLpa8133nHCDUTifWLO8FjmuHl2Mi5XQg5IKckNtzOLIHOM0Obcsq+tebRFmGNjc73a1Oiqrc9LQvCk1xfMGH4qu7kqpa9ywzm6E1rDunfBDIs9CqCvrNlYJVRWTkgoyDb/7u+OWvR6fTVjowgk3WMj96hVtbudkJ5KWmlgwt81bMm8jpVZSnVrINPfx+PF+ILWelYm5hJ0WgmfEjuJiUlJBZv88LgGr0MxCUu5PJ2Rr2/tahBnqz/t0oHNeWaw+a3W+QQvN1tVVSDq3JiuplI1PFhGMByf8LCznw6ckn1mIae9QTEo+yigrOdDTaj0O7k4smKvghBcSY4QA2lvqZacyI5FosHbssTjUN25LJJZSEd6RA92tVu7mCb39AeEKjMNeDEV9FPu+l1yQ6Yt89ECntOkEHRmflzsqym4lh5uutkZL5s4uJSWxllSLNDjWJ+8l3p6TyhrCFbVy7GCXbWZxwk8mt6HWccLqkLGOSeoVex0u+Uygaceh3jYZ7GmThcSa3L4/a9upnfBCXI4BzxFG0bqMJdCCQqRqW7Yq1ySzlZKBnrgcOdDh/Sv2CejP+NyyxGqqLaxWzPrjAoGYCV1tTWqJdNpqdHtszmqS/fTb8FJbE5WB7lYLV2W3VmW7MiVFOiHnS2H81TZsSHprRbbVSzumnltna+Pus06YIWQ6OrVk8WPu+YB6cLESHGMVCEFmB9fwYJcJ893JRX0s/OUwRid80A+b2OzBnlZZy6zLZkVSmltKH6Otr6+UptasbEfWpK8zbuGKYm4KcEoHm5Rujs7I8tq6DPW3W/VXsc7R+yyBEGQsExJ7uIfLyXW5NTrrLTlDDIkSNgWRO6iJVUiscV06urZtS3UpIXQSqV2TVDYjR9VjI6zihJ/t7R2Znl9R73zewlOW0ypRI6lACDKQ3LMESmOt3Lo/J+MzS97bIsTQQYswFQN/fXNVqmLUJO8+WQLa26qko3tDcjtJi3ETrmhq8IM+9wMk86jwGptekr6OFhPkYjWk/1sCI8icWXVkoENOHOqW6cWkXLo1KUnvABda2LV3WO83XlEqk5bUxopUR0uzdb66ukI6uytkK5KQZGpNDve1m9vq4Yr9AaW21+/N6DjMWeiUCqBSERhBJoJIAf6Zo31SG6uWj29NyM3RWdnc9BK4sMIuuOHdOO1WZFXqmzaKXmYEtbUVEq1Ny9L6oh0vxhgsRQ2qUxrG1Bu/dX/WtvQTqiplV7/ACDIgxCeHemySEtP58Pq4LK6s7T7rhA2SJljJh9QaXd9Yleb2jPT2FncyoLk9vZUSa1qV5VRSjg12yoUTA3bUlBN+OBXkhlrHi2olE0Ij2VzMZkJ/S6AEma3UtDl8XCcEq9Qntyfk1ticHcnthBMauJw81G2JlawkpLNnUy3W4g3LxsaINLVmZCk9b9bxEycP6hiM+7H4+wA2oI2Mz8nVO9N2TNPJoW5pL1Eyr0CgBBkoyj59pE+OD3bZNsbLt6fssEEnnDTURi1vcLA7LovJhIrysjQ3F08MOzpE1rcWZXphWd9Hj70X3wiyP0iuZeSTW1MyMbds+avjg90SK/Gp4oETZCB+9+TpQds4cOnWhNy+P2cWlBM+OI16aKBdzh7rt80hGxUJae3cKMp26tbWiDS2pWUlu2Tboy8c77dm9E742VSvm1YNH98cl6iOwfPD/dKn3nmpPaNACjIlJ7gPZ4/2WaPo966MWmtOL4MLJ/Gmenn85AE5oFbySjohNQ2rEm/d24lRVVUh8bYttcgXZT2zLqcO91j+wo9p2h9wZNyH18asdw5lbudUkNmgVmoC
KchAhy0uEpnPT25PycUb4zpxvMdFGIlUVuik6LBkGnG99HZC6htze1px0d9fJc2dKZlNzumCUKsLwoBtVvHKivCDXcdu4I9ujFneAJ0hlxEEAivINdXVcvpwrzx9etC6L7350R25fX/ey+BCCo1ccBsP93fIwuqSVDckpat7b2K5NTUVVtGxEZnXybkhj586aJPSm9DvDzgy7iM18GYWV01j8IzYzh8EAivIGCqdrQ3ywoUjtmvqxuisvHXprswvexlcWGH7/GNqqcLqxrzEO7JWI/woqa+PyNDhiGxULsjY3LzVnT5//rB0xhvdOt4HcG7n1ZEp+eTWpNXBP6mLcV9nc2DufWAFGbhIg31t8tz5ITtp4r2ro3J5ZFJy3ngolHBuHRYL9aDUBFfEEjJ4KPJIy+BaWiok3pWSta1FaaqLqRgPyVB/m4vxPoCzO6ncevfKfctNYRmfOhKsvEGgBRksxnOsX547N2Shizcu3pE74/PexD6EsFFksLfNLNaO1npJbS1IXcuq1DfsvuAbgrXdGM/IQmpaMtmsPHP2kDx+4kBJ+t46xWd1LSPvq1F38caYdZZ89uyQ7Q4O0mIceEHmUnW1NtokZQcfWxzf+PiOzC+tedVFCGmoq5GnTg9a1UUylZKlzLT09m9Ka/yb95Woa9ySzepFWVhdNM/rxceOWDLHN4GEHww4wp5/vjhiYYtndTE+c7S3ZE2EvojACzLQeOhQX7s8q1ZybSwqb1+661UXIYbjvJ4+M2iimcwkpC6elI6ubyaa7R0RaelMyfzajLmoeFxsBmBsOeEGu41z8shBjc8k5MzhXtuRSYfJoFE2o7GuNmruJSsb+89/+8516wiX3ShNhzBn78BiJZH78hNHzWKeTExLtCEp8fjDD9dIpEJ6e6vk4OGsbEVnJVqzZTmJJ9QCD0LdqbP30Hz+3SujtgmkWz2ibz1+VA72tgYyb1A2gsylo+ri5SeOyYXjA3bcyq/fvi4jY/Pe6yKENNTFzIohC57ZSMvqxqzUN2el6iE9zJgaQd39G7IZnZbVzIqNnVefGraJ6YQftIEDL964OCKZ7IYZdJRXBnUxLit/jRWNY39ee3rYNhLQFOT3792Uybll31odMjBeONvs+QtH9J63yVxyQWLNy9LVk3/uQeB1Dc0bsrY9LRMLc9bj+DvPnLDwV5WHKvYFVFW8+fFd25HHARhPqyDTkiGolN2opOqCba7ffe6kdMYbLC70hw9uyXxidfcVTligyc9xnUTfeuyItLfGJLUzK90HU3L4cPQrj3uKRiukb6BCGtoTMrc6Ix3xRvnusyfl9JFeiekYcsINhyTPJ9bk9Q9HrPUCPUpeeuKoDPaWtr3mV1GWZgJlSsQAX1HXk4v7h/dvyTuX78lqKrP7Cics0CP7ydMH5cXHDsvmdk5WN6ekayD9pR3hEOMDByLSP5SSdMWsNNZXy7efOW5jxntV7A/o5PauasIf1VijnPI11YrHjg8Efjdm2fptTfW18syZQ/KiurTEhn737k354NqYJfyccMEuupceP2ox5cTamswkx6S+eV2t5N0XfAY6xh08WC0HjqRlPjUhO7Jl30vskI5uTvhBDz65PSm/ffeG6cELFw5bVU1LY3BDFQXKVpCJD+KGvPrUMatbnV5Iyv/3x0vy9id3JemWcqigNG2wt12+99xJ28U3n1xUN2lO+g9u/VWbTsIYR4/WSN+hlEyujEoyvSZP66L92tPHbawE2VV1Hg2bW1tWb/yrt69ZiRv17HhHPR3NZXH/y1aQgcQMCZrv60RlN9/o5JL8558uW1s9dvU54YHmL7Rk/baKa29nkyRzc9LcvSxHj0WkqbFKaqKVMjRULUPDWZldH5PZxLI8duKAivgJK3HCbXXCzdb2joxNJ9RbviHX785YEu87zx5XjWgrm/tf9qOUxA8NYn704imdgAMyNbcsv3jzqm0cSaU9fBEm2FV17lifWTzUJ8+tTkt7X1KODVfJyVM1
Mng0IxOJe7K4kpQnTw3Kj144ZZUVvhMv/CDGzH2qri5eH7OTo7/77AmrZy+nxTgUZgOnFp8Y6pF/fOWcPHHqoIxOLcp//OGSvH/1vqx6TDlUULL0/LkheeXJYdmRbbkze08aOxel6+CKjCXuytTiklwYHpD/pmNheLDbj/LfB1BRQenrr966Zkk8atipqMFAK7fNP6Hx4yhlwqX98Ytn5NThXjv26T//9Il8pKvletrDF2GBWnQaw7z65DFL1Kyup+Xm1IhcuX9DJuYWrVvcD9Uy5kxGSiSdcEMHt9mlVfmDWsaEKiojleZBPXcun8Qtty5+oRFkIHxB0ocJeVZd2/tT+Zjy25/ck0Ry3VZSp/whOdOrLinJXE6InpxdlrGZhHS3sS32iIWwvEdF+GEzGD0qCFNgGUd1Af7uMyfkxQuHA73548sI3ajFKqL4/59eOW8Z1vGZJfl/f3vRVk9unu/oK3/o3LWQWLPdV5xITp8TXNN0dsPOSktnNnZf6YQVjKvphRX5zTvX5RdvXrOvff+5U5b07Wwt36O4QmlGUPx/+kiPxRGfOTskiysp+a/XL1vvi/HZhGxueS/lcoV7Ryb9l29dk1+8cdXE91sXjlhv49X1jPzqrete+hhyCFNw/NLv37+lc/qGbGxsWY8bSmAJZ5VzeWPk8BM/+O8nDnVb7I1dUWEBl5VCcG5QVSSiQrwst8fnJbWetbZ7xJe8n0F5QaOYm6Nz8vM3rsjrH96W6qpK+c4zx+X7L5ySYwe67CQZDi+gf0H+/tfamPYqi/BAnTEL8m/fu2GhCvxdduG9Vqg1LqN7jYdHNVhrU72V5rU214dXkIFyl2YV5f6uFqmP1cjE3LJcuzttFnN9XdQEm7hzubo3+wXcU6plLt+ekp+qp0P1DD2Tf6BCzKaP/s64fU4zIsqfbo/N2SOnAt7aVGcLsG8KKX9otctBxyzIf/rgthlUxIy/8+wJ6e1o0fleXvd43wkysGJSs0q7RT4uJFJqZc3azj5oaoiZWLsoBxPysGTROXWcici9O9TfbtU0nCLT3tJg947Ft+ARwd2JRbmloszpEAz0RhVlT/SVL2z0wpj62Z+vyDuf3LNFlp2brz2llnGZ7sLcl4JcIFZTbRZUe7xBMrkNGVHXlhAGzatJCDXV19iK68IcDBBiJiFHdv367Wvmoi4up+TCiQH5yUtnrNaYxfSz94tJSSiKbdLc71lddG/dn5NEMm0HqOItBeW4d+fBwDsiH/DB1TH5rz9dNlHmdPIff+uM9ahgTpfrnN3XggxUYHS1NUqfurjV1VUyPb9iWyzJ1nJTKSj3mGPpoRJmLrFq3bp+9voV+eD6uDTqvSFx84PnT1m9eW1N9HMnIl9rVE+InVqUPs0trcmVkSnziEj2EsKgPKpcJ/F+gmoaWmi+od7Rf+k4uD+9KKeP9Mk/vnxOnjw9KC1lWGf8Wfa9IANua2tzndWv8hEr7Oa9GbOYKZti4mNlee+D0kBi7t7UgpUpUknBDqzjh7rkRy+elm89dtSE9qtCD0xSxBdLmcTe8mraLO2J2WV7HneXFq4eVw4uJHDvTS5a4o7qqJXVdTtnkXFw5mifebXlvqi6IO/CjeSG9nY0S19Hi1rLEcvME5+cXUqam4TLW6MPF+biwAScXVy1ZuK/fPOavHd11KpjXrhwxJJ3NI9q0oXyYUSUhC0xZo56p1Tq3tSihTAIfeAtsfB6UjdY4B2xgF5Ur4h4MZu6onqPXn162OqMDw90hGYHpgvyZ2AOFiYsx0JxUcjkX783a+37EsmUvQbhZgB4GGNvwC1dWknJ1ZFp+c07N8wauj+9ZE2BfvTiGXnlqWMy2NNm9+Dr3AK7x/EG84jYXk/o4urdadvdR8lUfV2N1OnCW+kLb8khAcvuWk75+OkbV6yEkRM+EGJ6l1AtxSIdFlyQPwesI9xXahh7OppsAhNbvnFvxuodV1NZqVELmiw9zzmPDpKrt0bnzC1FjGkqzq67
588PyfeePSlPnDpgceBvGlqwZF9jrYU72nQBZnMJljIPFmGqbwhteBVG6Uilc3b/f/7GVfnTh7fMY3r23JD88IXT8vjJgzYOwubJuCB/CWTfsZYPdLfqxG2WSCWbSRJ/EWZ2hPEaQhlejfH1oXoinc1ZR743P7prrVI/vD4mW1s78tjJAfnhi6cte06f60d93E5MvR3iygewlvU+spXeSiB1AS6EqbCiXZiLAx4Kp3swv/58ccRCVXin3CMWZDb9HO7vUC+1+mt5R0HHBfkrwJLCWsK9xWXmQlGMjutEuc3MYtKEmdfhQnv88cFB8LCCWOTY2MHko7ifkiZap9KhiyL/k4d7hOO59iJ2z53injHwCVO1t9Tr4rBpoky1DRuGCh6Th6n2lkLO4OK1MfnlW1etOdBaOmv9Z37yrbO2Fb4j3mDGT1hxQX5AEFxCFP3q4mIx486urKYtvnzz/qwlAJP6uU3eWI0lBZ0vhkqW22Pz8talO3Yg7etqDdGdbaCrxXbaYQmdH+7XgUl4Yu8nIDrLWKc7HGGMGrXEKX28pqI8NrMka+sZiVZVeZhqj+Aw4k9uT8nv3rkhv3vvphk8Pe3NOhaGdSyckOHBrlBUUXwVLsgPCULLRUKUB7rjthOM0jjcbVyru+MLMpdYk5xa0azkTF4su/1uWOGKbug1weK8emfaLOFfv3NdLeMxq3BgmysHj373uRN2sgfVLpwGUuwJmLeW6+z+9qtXREhqcrc2fXR6SdbVYuN9Idj0zXBv6OtDlQvxeuL25AwoacTrbFBv5PnHDlt9+dOnB6WrnTzO/ggZuSB/DZiExBbZen30QKfFNuMqzMQ8x9T9JhGB9UcBO2JNL4WCOO83l5dJxym/hCUuj0zJnz5UIX5bhVjd0lxuU69fh5WxcbQOiTuEkGtbSqHjHrGLr18XXOKVbTopcupO39HFlmoMwlTEOYkre/7g4SkI8V0Vn7cukTO4Ju9cHpVtXbY5lv/7z5+UV54a1mvfrvoT3VdzxgX5G8A4wVrqiDdaKQ4XMG/Z5asyrrO5RC/wmFpWfM4gJPTB9zCZwzqJiQ1nN7bM5b+ibih9Bl6/eNt6T1xT6xgBO3+8X17VSffyk8N28GhvZ94iDhLcH2LHhDBYKIgv086T+8rpM9PzSUmuZUwwqASh/GofacdDo8PCQj837s3aTjtO9HjzYw6KSAl68331jihlO67/Z2flflzkPk+QK77zL/+680+vnpd//u7j5r45DwZChOU0v7xm1hTuF81saJy+ubltycG+rhY51NtmCULi0XG94PVqBSDi5ToA+btJyJCg4xSWyfll21GFBcSuOixkRJjuaxwwyWEBDDbKzdg9Vw5/N5sTiHvzN1EBUth6zVtHsPmbTg71WAyccqygLS6lAhHO5jasrnx0ask8DEJWs+plMCZY6M4N98vZo732fwzA/SjEBfAe/+3f3zTPjF7ORwY6XJAfBYjU8lraYsv3JlScJheswJ3kHwMUMSKrz4PidhqisBrGm2otKRj0MivCMMRTEWDiwgu6CLGx4p7+jSTBCjvf6Lo1uLsA8RFPgoY+5TrlClYe95UQDIsuiy8LMYstlh7CzN9LyWS+PGv/CUyhgoYywruT8ybCJEgJ45EQP6pCQyUNC9mB7nzJoeOCvOeY1ZzbNIHChWfH2T21snBNiEUyVelQhlXVpaJMvSXtIsn2I2Yt+nViz8xpJjavL9YER3x2+Kf/4f8k5djCOq3vG3d9bilpf8Pc4qodm5TUBUhfZtuSGVCIEolPwjjEYfM768IhTlwTNghNzCWs5wl9malPp0wLIcYDwBOiLWhh16flECrD8fd/HngRJLMXdKzT9Advggdjng6KXIO8CPeY0DDO85UTuz/AcUEuJltb27YVlMFJY3zc+onZhAk1VqYlAPU1xCIp9yqIc6tazfRsIORBrwWaHRHioDsde/px/ZjoWNUPW6vLNmV+J5OJ3Wq5zU3J5bas1ppQQzKV
tjgptcGcTYf4IsK4oLye7DeTinAEZUrUax9UKxg3nvfKkfth3lRBgiqb3bT+zDTAx2Imvsz93NZry4KKILPVG5G266IWImGqKr135c6noaqUjelRHdOFcU3OJBKpMEODuPCpoV4Z3I2LstN1P4cmvggX5BLCYEac53Qy0xKSuBouHs2MCAEQDmDTCYOa5BLlQFjTCB1tQfk/XcqIw2J98ZE2koAwY5H+9T7/vOiyKCDEmLMIL7+D98K2Zd7Pigrwmgrwij6oDyWRlVrP2VE51q5SJ1SHTjImGiLMFvPO1gZ9NEmzvp8wC/CXwaK2pMKEVUgYgw51WIdsduA5FthBFWd697JwEaaK6/yiQod7VS4QnsEjSqys20JE0pq/kwfjFmMBrygfovo0XMXmHtfgL8cFOSAwYbGQsUgRRUIcTGQGOGECvoY4Yq2mMllLEmJh1ESxkPOuMKJsYqhCW6WWa6wmKtX6OWGEApsIr/4ehBkQZsSYmDCCSynapn6tWn8miwCij0VOs36sPdxx+kd3xhttYeB5+hCH2RV/WLiXqUxO7+GaJTXvTS7lcwmTC3Y/uU/EUfPXssnEi/AU9bZcV64715NLamGqEqhYIUyV95y2bFHGcOAg0b8YDvqRcUquhNfzt1CqhhV8UD0C/iY8vVKXMZYTLsgBBAEljIBQ8sCizYcPEOS0feTzgjWb1smPsCKyiDon7jL+mUxYvfwMILRBLwhEm8mGaBesawSgVp9DZHGpCY9gfRMqoQSJz5lYvJYHP8sn2VdTuCd4OyQ7ceetDHJhxXZ60usZMaNsDovZPA6ETL2QZvOAau05+mlYiEqtT8JAj+r6E3JhcWeM4S0RAybpTBiCsZVP2ua9N6pKqBgiJ8JZdYwHBLe3s8XCMkN97ZagZjdj2ENVe4ULcpnCJM5kN81yYcLnLZltE2CsYGCipT/zOROkIMCFz01kdy1sJhGTab9WBuw1LLRZvVckvYjDI3JmdarQ8ZG4MyJNIrWwIFpoSj9a7kC9lMIiSdw+VlOl90zvFaa0EtP790WhD+LZ1IazOBcWfISXsbOe2bDfa16Yjie8sIIBwNdS6XzZooWq1Kpn4SD8YjkOQldtzaYThXHlfH1ckB2nBCCKVK1giSJ8WKKEM9hARFiAzwlfUclB/TNhA0QR0aPGmf+zuLKFu7B48jmPzwMPilAVovxXnxOi2g1VZVWsWdj5HexUxDNC/KmOQHx7O5psExTtAlgcwpKYDBIuyI4TACx0QJhKBRMBtnxB4aEWK+JcCFPxOa8h7k+IAWFF4JHlws+xLxTQJ6oq85U4fJmdhVjSeWHPn5RjoSp9mGVesMZ3E8d5rynqoaoi4ILsOGUAlis9o7GYqXUm1EDYgXguoYfCawhZUYaHMBdAiIlBY03zfwSV8FS+CidvaRMaQYR5nZ+UUjpckB0nJKDBhB8+10JW4XXrNvh8niD78ug4ZQhaS1jBYsmxzzz0c77uYlyeuCA7juMEBBdkx3GcgOCC7DiOExBckB3HcQKCC7LjOE5AcEF2HMcJCC7IjuM4AcEF2XEcJyC4IDuO4wQEF2THcZyA4ILsOI4TEFyQHcdxAoILsuM4TkBwQXYcxwkILsiO4zgBwQXZcRwnILggO47jBAQXZMdxnIDgguw4jhMQXJAdx3ECgguy4zhOQHBBdhzHCQguyI7jOAHBBdlxHCcguCA7juMEBBdkx3GcgOCC7DiOExBckB3HcQKCC7LjOE5AcEF2HMcJCC7IjuM4AcEF2XEcJyC4IDuO4wQEF2THcZyA4ILsOI4TEFyQHcdxAkLFK//z/7Hz7LkheeHCYWmsj+1+2XEcx9lLProxLr9++7qcPtwrrz51TI4MdOQFuS4WlXhznVRF3GB2HMcpBsm1jCyupOTFx47Ia08Ny2Bvm1T8b//3z3YSyXXZ2dl9leM4jlMUotUReer0oJwf7pemhlqp+MN7N3fmEquyve2K7DiOU0yIThw72Cn9XXGp
rKyQitHJhZ2Nre3dpx3HcZxiUVlRYbm7hroaqYpUyv8PhZxm/8xcfwEAAAAASUVORK5CYII=" width="320" /></div><div style="border-image: none;">So to calculate the Confidence that if you have milk, you have bread too, just take the support for milk and bread (we counted 3 above) and divide it by the support for milk (it's 4 in our list). That gives a confidence of 75%. The mathematical notation for this association is Milk --> Bread (60%, 75%). The 60% is the support for the pair pattern, and the 75% is the confidence. Notice that this calculation can be done both ways. What if I know I have bread and want to know the confidence I have of there being milk also? The support for the combination of the two items is still 3, but the support for bread on its own is also 3. That means the confidence is 100% based on our data. So Bread --> Milk (60%, 100%). It's obvious from the numerical results that the Venn diagrams I drew above are definitely NOT to scale, but it makes it easier to visualize the difference between the different items and their intersection.</div>Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0tag:blogger.com,1999:blog-1030543525516570581.post-84251015880428270032015-02-10T20:20:00.000-05:002015-03-17T21:29:53.079-04:00Purpose of this BlogTo those of you who may stumble upon this blog, I want to describe the purpose of it. First let me introduce myself. At the time of this writing, I am the director of manufacturing quality for a large company that makes solar panels (if you're interested here's my <a href="http://ca.linkedin.com/pub/jed-isom/1/966/803" target="_blank">LinkedIn</a> profile). I spent the first 7+ years of my career working for General Electric (GE) in various businesses while almost always working in manufacturing quality. This gave me a heavy dose of Six Sigma training and the statistics understanding that comes with it. 
Even before joining GE, while getting my master's degree in mechanical engineering, I was fascinated by the power of various methods of learning from data, especially design of experiments and optimization methods. <br /><br /><br />The more I have climbed the corporate ladder, the more I have realized that I miss the awe and excitement that come from gleaning something important and powerful out of data that nobody could see before. There are times in my line of work when I am able to use the data generated from the factory to create software/programs that help the organization "see" what is really happening, and help it react to the issues that really matter. The ability to create this type of understanding is limited by the data analysis techniques one knows. The more I learn about data mining and associated topics, the more I want to understand all of the data analysis options that are out there, so I can pull the right tool from the toolbox when the time comes to analyze a particularly tricky data set. That is what this blog is about.<br /><br /><br />I plan to use this blog to document what I am learning as I take various online courses, read books, and do research on the subject of data mining. I plan to make my posts simple. Having read some of the papers and taken some classes on the subject already, I know there are plenty of PhDs out there who teach at the level of their own understanding. It may be easy for them to understand, but that doesn't make it clear for the rest of us. This blog is going to be my personal solution to that problem. I hope it helps me...and you. ;)Jed Isomhttps://plus.google.com/116736772802461278386noreply@blogger.com0