proggr-blog · 8 years
Text
Rails 4: Paginating an Array from Subset of Active Record Results With will_paginate
I’m working on a rollout for a new CMS and bumped into this issue at the last minute. It took a bit longer to track down an answer than I thought it would, so I’m adding it here in the hope that someone else stumbles onto an answer quicker than I did.
The Scenario
You’ve loaded your result set and potentially cached it with Redis or some other key-value store, so you have access to it as an array rather than an ActiveRecord collection. You have the total number of entries, but can’t successfully paginate the array results because will_paginate seems to require the complete result set rather than a subset of it: it either doesn’t render your list items, or, if you leave off the current page number, it renders but always assumes it’s page 1. It’s assumed you’ve already figured out that you’ll need to add the following, either in your controller or an initializer, to make will_paginate work with array collections.
require 'will_paginate/array'
Scenario Breakdown
will_paginate expects the collection being paginated to contain the complete result set, and populates the list of page numbers from a combination of the current_page, per_page, and total_entries values. The issue is that if your array only contains the subset you want to display, it represents the first X records (where X = per_page). For any page other than page 1, will_paginate looks into the chunk of the array corresponding to that page, finds nothing (since you only have the subset you want to display), and your list renders as if it were an empty set.
If you leave out the current page in this situation, it defaults to 1, which means it will work, since it’s looking for the first X records, which are exactly the results you want to display. But in this context the view helper doesn’t know you’re not really on page 1, so every page renders with page 1 as the current focus, which makes the ‘Next’ and ‘Previous’ links essentially useless.
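Both behaviours come down to plain array slicing. Here’s a minimal sketch in pure Ruby (no will_paginate involved; the index arithmetic just mirrors what the paginator does internally):

```ruby
per_page = 20
subset = (1..20).to_a            # only the 20 cached records you want to show

# Page 2: the paginator reaches for indices 20..39 of the collection,
# but the subset ends at index 19, so the slice comes back empty.
offset_page2 = (2 - 1) * per_page
page2_slice = subset[offset_page2, per_page]   # => []

# Page 1 (the default): indices 0..19 happen to line up with the subset,
# so the records render, but the pager still believes it is on page 1.
offset_page1 = (1 - 1) * per_page
page1_slice = subset[offset_page1, per_page]   # => the full subset
```
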
The Fix
At first, the only solution I could think of was to populate an array with enough elements to fool will_paginate into thinking the complete result set was present, then slice the real results into the right spot. With potentially 100k articles making up the result set, that’s a horrible solution.
After enough searching, the real fix turned out to be pretty simple: it just required a paginate function to be added to my articles_controller.
    def paginate(articles, page, limit, total_entries)
      @articles = WillPaginate::Collection.create(page, limit, total_entries) do |pager|
        pager.replace(articles)
      end
    end
This function accepts the subset of articles you want to display as an array, the current page number, the limit, and the total number of entries. Rather than relying on the default constructor, it passes the page, limit, and total_entries into WillPaginate::Collection.create to give the view helper what it needs, then replaces the values of the current page with the array of articles you passed in. This skips the need for will_paginate to chunk through a massive array for the 20 or so articles you actually wanted.
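For context, here’s a sketch of the calling side. The names (all_ids, per_page, the cached id list) are hypothetical stand-ins for whatever your Redis setup looks like; the point is that you compute which slice of the cached result set belongs to the requested page, then hand that slice plus the totals to the paginate helper above:

```ruby
per_page = 20
page = 3
all_ids = (1..100_000).to_a            # stand-in for the Redis-cached, ordered id list
total_entries = all_ids.length

offset = (page - 1) * per_page
page_ids = all_ids[offset, per_page]   # the only records actually fetched this request

# These values are exactly what the paginate helper above needs, e.g.:
#   paginate(Article.where(id: page_ids).to_a, page, per_page, total_entries)
```
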
Hope this helps someone :). And with that, back to the deployment.  
proggr-blog · 11 years
Text
Githubberment
I've been toying with an idea and rather than have it die with me I thought I'd share it and see if there are any obvious shortcomings with it or ways to shape it into something more useful. It combines a transparent, versioned repository of public data, and a citizen engagement aspect to keep people informed with as much or as little data as they want.
The idea builds on the work and concept of MaMairie, a project by OpenNorth that allows users to subscribe to points of interest and receive updates for legislation, policy or budget changes that affect those points of interest. One of their biggest issues seems to be access to data, which is where this idea could play a role.
If you're not familiar with Git or GitHub, this may not make sense. There are self-hosted versions of GitHub, so this could all exist domestically behind any firewalls and ACL rules that are required. Ideally I'd want to find an open source alternative to GitHub, but GitHub makes a nice illustration of the goals.
The idea is to essentially hold all public data in publicly accessible git repositories and then make use of git hooks to automate some tasks to distribute this data to interested parties for various purposes. I'll use the example of changes to legislation to show how I think this could work. 
You could have a legislation repository, with the master branch being officially recognized legislation. For each session of parliament, a branch is made from the master branch, and for each bill, a branch is made from the session branch. Likewise, an issue tracking system would create a milestone for the session, and issues for each bill, tied to the session milestone. These issues would be tagged with any relevant metadata to help organize what the changes represent. Any changes are committed to that branch, and the commit message contains a reference to the related issue number so it's automatically linked to the issue. Approved bills would get merged back into the session branch, and sessions upon completion would get merged back into the master branch as recognized law. This could be set up in any way to accommodate business rules, such as requiring Governor General authorization through the use of pull requests and user permissions. This would all be performed through a client that simplifies the process and interacts with the underlying systems in the background, so working knowledge of Git wouldn't be required by representatives, just training on the software developed for this purpose. Nor would the process be as clunky as I've described, because the software would handle the heavy lifting to ensure things are maintained properly.
Whenever something is committed, git hooks would notify the user engagement service that issue X has been created and changes proposed relating to it, and any users subscribed to any data points that relate to that bill (as determined by the tagging, and perhaps some string manipulation of the diff between versions) would be notified of the changes. They would be able to view the difference between the previous version of legislation and the new version. I'm imagining the data existing as both a JSON representation of legislation and a plain text description, with the hopes that the JSON representation could create a digital model of law, but even if it was just the bill's text it would be a huge improvement. Organization of this data is definitely something I haven't even attempted to figure out, nor would I want to have to. 
The next step in the engagement system would be to allow users to upvote/downvote bills directly in a sort of poll, and provide a way for conversations to emerge around proposed changes. This information could be visualized and presented back to representatives to give them an idea of the feedback from their constituents, or an even broader view of the overall response, so they can be more informed about people's opinions on various subjects and vote more in line with the people they represent. If the system sees that constituents are overwhelmingly against something, and the representative votes the other direction, it could also start a dialogue that allows the constituents to demand an explanation for why they were ignored, perhaps flushing out corruption earlier on rather than ignoring it until it gets out of hand, or at least demanding context for decisions. The nice thing about using a version control system is that at any point, if you wanted to challenge a law directly, you could reference its commit hash and remove or add commits from the repository after a sanctioned vote, if it's determined to be invalid.
This system could be used for all public data: Ministry budgets, meeting minutes, video files from parliamentary sessions, etc., all existing in version-controlled repositories that are cloneable by the public. Private repositories would also be easier to manage permissions for in a system like this that's made specifically for tracking data, and by tying access to SSL certs you could essentially lock data down to certain machines as well, if they're managed in a way that prevents data being removed from them. Best of all, with the wiki system and GitHub Pages, it could very well provide a way to get rid of the awful, disparate, and outdated Ministry websites that plague us now by organizing those sites around the data sets that drive their existence.
Also, via APIs this data could be exposed to applications that wish to build on public data. I would love to see easy-to-use government OAuth services that applications can tap into for data. Ideally that would be built first, and the user engagement system would become an application itself.
I'm sure there are a number of shortcomings and hurdles that simply make this impossible, but from what I can see most of the shortcomings seem to lie with existing policy and the impossible task of government buy-in, not the concept itself. I may pick it up as a hobby project at some point because it could be useful even if it's not used in an official setting, but I would love any feedback on it.
proggr-blog · 11 years
Text
Reddit Hivemind Based AI
The other day on Reddit there was a conversation about shitty mods and the awful views of user bases in certain subreddits. For some reason this made me start thinking about the sheer number of people and difference of opinions on the site. Clearly there is a certain hivemind that skews the opinions of the site one way or another, but as you venture into more focused, less densely populated subreddits you can still find a decent representation of most views. This got me thinking: if natural language processing were able to analyze and quantify the varying "extremeness" of people's opinions based on upvote and downvote values, combined with some kind of ranking determining how biased, and in what direction, a subreddit is perceived to be, it could almost become a digital hivemind by inferring the best answer to any input based on Reddit's generated answers for related inputs. Random thought. May not even make sense. But the combination of crowdsourced rankings along with mountains of text-based views is hard to ignore as a potentially useful source of data for an AI. If we wanted to make a sentient form of Reddit anyway... maybe it's not such a good idea.
proggr-blog · 11 years
Link
As someone who's already (I say already because I'm still in my mid-twenties) noticed some minor hearing loss at high frequencies, this is a great breakthrough. Yet again, 3D printing is propelling us forward. And me becoming a cyborg is looking more and more likely every day. 
proggr-blog · 11 years
Link
An awesome step forward for 3D printing. Staples is now carrying Cube 3D printers. For those who aren't in the US (like myself), the trusty ZIP 90210 works fine. 
proggr-blog · 11 years
Link
This is an amazingly in depth look at one of the many worlds in the Mushroom Universe.
proggr-blog · 11 years
Link
A Pentagon-backed company, AOptix, has created an iPhone "case"/app combination that enables users to make use of facial, voice, and fingerprint recognition using their phone.
proggr-blog · 11 years
Link
With regard to the last post about graphene, I felt obligated to also share this breakthrough that I remembered watching: a cheaply made graphene supercapacitor that could change the way we power devices. It's made by using a cheap CD burner to turn the graphite oxide into graphene.
proggr-blog · 11 years
Link
This is great news, and yet another example of graphene being an incredible material. Here's to hoping Lockheed Martin doesn't get too restrictive with the technology.
proggr-blog · 11 years
Link
“That’s where the API comes in. In my opinion, the gateway to mass-market 3D device creation is through software, much more so than lower cost and easier to use 3D printers. While I’ve stated that low-cost 3D printers for consumers will eventually arrive, the services are much more accessible and affordable today.”
proggr-blog · 11 years
Link
This cleverly built billboard is able to produce water through humidity. I love seeing creative combinations of things we already have and things we need. I hope they scale this out everywhere. 
proggr-blog · 11 years
Link
Brilliant talk. Food prices are going to do nothing but rise, and they're already outpacing inflation. Pair that with rising Asian markets which will lead to resource conflicts and those prices are going to rise faster still. It doesn't matter how comfy our lives are now, this will affect everybody over the next 20-30 years. Plan ahead. Grow your own food. Print your own money. 
proggr-blog · 11 years
Link
I'm getting ready to launch sourced.fm and I can already tell I'm going to be obsessively watching stats. This looks like a gorgeous dashboard system that I just may have to look further into when I can justify the $20 a month on it. Either way, impressed me enough to give a mention of it.
proggr-blog · 11 years
Link
Google has released an image of a prototype for a prescription friendly version of Google Glass. As a wearer of glasses myself, this has always been a concern in the back of my head. Glad to see they're working on it. Also, those frames just happen to look like my current ones so it would be an easy transition. Exciting times seeing all these new gadgets get developed and released.
proggr-blog · 11 years
Link
NASA has announced that Curiosity has found traces of sulfur, nitrogen, hydrogen, oxygen, phosphorus and carbon in drilling samples. This comes on the heels of the potential biological fossils found inside a meteorite from Sri Lanka. Maybe we're not alone after all.
proggr-blog · 11 years
Link
What an interesting idea. Using the P2P nature of BitTorrent to broadcast on the web. And the more users watching a stream, the better quality the stream. I hope they offer audio only streaming. I can think of a certain personal project that would love to make use of it if they did. 
proggr-blog · 11 years
Link
For a while I've wondered what a motion controlled keyboard could be like, and today I got my first glimpse. Fleksy, who started by developing keyboard software for the blind, released this teaser video of the technology being used with the yet-to-be-released Leap Motion controller. Looks like it could be a pretty interesting implementation of the concept.