MakeWebGames

Meteor


Spudinski

Recommended Posts

Someone sent me a link to this earlier, and I have to say it's pretty damn awesome.

It's built on Node.js, and that is the main reason it's capable of what it is now.

As with Node.js, this is also a rather unstable product, so keep that in mind.

There are also some alarming things that come up at first, like the client having full access to the database.

They have said that they are working on that aspect, though, so we should hopefully see a fix soon.

To quote from their website:

 

What is Meteor?

Meteor is a way of writing applications that are ready for 2012, not 1996.

 

The first "web apps" in 1996 consisted of a web server and a database on the same rack, sending rendered HTML down to a web browser, in an arrangement much like the "dumb terminals" of the mainframe era. To this day, all of the mainstream web frameworks, from LAMP to Rails, still work on this model. All of the mainstream web technologies, from nginx to memcached, assume this model.

 

In 2012, the "dumb terminal" style of application is long gone, and instead we have a sea of smart clients: the JavaScript applications that run in our web browsers, and the native applications that run on our phones or tablets. They talk to an ever-growing array of scalable, distributed cloud services, such as Facebook Connect (authentication), Google Maps (location awareness), Amazon S3 (storage), and whatever custom services a particular app may need to run in the cloud due to security or persistence considerations.

 

Meteor is a new application platform for this new era. It is built around Smart Packages: little bundles of code that can run on a client, inside a cloud service, or both, and that can manage their lifetime inside the modern distributed environment.

 

Meteor provides a Smart Package to address each of the main challenges that developers face in this new world, such as updating a web page automatically when data changes, or performing a "hot code push" to update a running application without users noticing the change. Developers can freely pick and choose the Smart Packages they would like to use in their app. Meteor then processes the Smart Packages together with their application into a self-contained bundle that is ready to deploy into the cloud.

 

We hope that as the Meteor ecosystem grows, a wide array of Smart Packages will become available, from complex distributed cloud services, to attractive user interface components, to time-saving business process frameworks. It will be a snap for any developer to build powerful, modern applications — applications well prepared to handle the next two decades of technological change.

Have a look at their screencast of what it can do: http://meteor.com/screencast

Pretty awesome, right?

S.


Does look cool indeed, and thanks for sharing, Spudinski.

However, a couple of things came to mind:

- JS is still JS, really not the nicest language out there. Too bad we're somewhat stuck with it.

- Security? It's not SQL injection here in the example, it's full DB access. So unless you can control what can and cannot be done (and I have no clue how), this is totally useless. So don't come saying it's done in 1 hour; it's actually 1 hour for nothing, since you can't use it.

- Server usage: for all those kinds of instant Ajax web pages, like Google's, where if one user modifies something the other pages get the update instantaneously, you basically have a connection which is left open for a given amount of time, say 1-2 minutes; then the server closes it and the browser opens a new one. While the connection is open, the server can push data through it. Now that's all good, as it's really the fastest you can currently have via Ajax, but it has a major drawback: for each connection the server must keep everything in memory and keep a TCP socket open. We know there is a hard limit on the number of TCP sockets a single server can handle, so I wonder how this scales up and what the overall server usage is.

In my day-to-day work I really don't see much benefit to such a platform. I don't need live updates, and I don't use much Ajax either. Code and presentation are already separated, since I use ASP.NET, and I prefer C# over JS.

I do see some cool stuff here, however, and I can see how and where it could be handy, but it needs to be really thought through and tested to see if it makes sense. I'm still by far not a fan of having JS on the server, even if that would mean unifying the development.


I agree with you, Alain.

The database aspect of it is still insane; I really can't see a use for it at this time.

It's totally illogical to grant full access, and on top of that it only supports MongoDB at this time.

Edit: As said, an additional security layer for the database is planned.
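A security layer of that sort could plausibly take the shape of "allow rules" that every client-side write must pass before it touches the database. This is purely a hypothetical sketch in plain JavaScript, not Meteor's actual API; all names are illustrative.

```javascript
// Hypothetical sketch: gating client database writes with allow rules.
// A collection accepts an insert only if every rule approves it.
function makeCollection(allowRules) {
  const docs = [];
  return {
    insert(userId, doc) {
      // Reject the write unless all allow rules pass.
      if (!allowRules.every((rule) => rule(userId, doc))) {
        throw new Error('insert denied');
      }
      docs.push(doc);
      return docs.length - 1;
    },
    find() {
      return docs.slice();
    },
  };
}

// Example rule: users may only insert documents they own.
const posts = makeCollection([(userId, doc) => doc.owner === userId]);
posts.insert('alice', { owner: 'alice', text: 'hello' });
```

The point is that the client still issues raw database operations, but the server gets a veto over each one, which would address the "full DB access" concern.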

I don't believe it could have been done with anything other than Node.js and still run inside the browser.

Using Socket.IO, an npm package, is amazingly awesome as well. It's quite well optimized in Node, from what I've heard.

The exchange still puzzles me as well; it's still somewhat witchcraft to me personally.

But at least it's not Ajax; that would have been impossible (one-way connections). TCP sockets are used for the connections here.

The way I understand the exchange on Meteor, it functions like any Node app that uses sockets: an open channel from server to client and vice versa.

The ability to "stream" hot copies of the application to all clients is just freakin' amazing if you ask me.

Couple that with latency compensation and it's epic, really f'n' magically awesome epic.

Edited by Spudinski

To answer on the Ajax point: I didn't check Meteor, so I can't answer for that, but I strongly suggest you open Google Docs (the same document) in two browsers and use Firebug to check the Ajax activity.

Basically:

Your browser makes an Ajax call to the Google server, but instead of getting a quick answer, the answer hangs for nearly a minute or so. During that minute, if there is an update to send, Google sends data back through the open connection. After the minute, the connection is closed by the server, and the client then opens a new connection.

Why does it work like that? Well, first of all, TCP sockets and WebSockets are pretty new and would not work with older browsers, plus you run the risk of being blocked by firewalls (for example, where I work it would not go through). So relying on older technology makes more sense. Yet by keeping the connection up (it's like delivering data very slowly), the server can use that connection to send updates as soon as it gets them.

Of course, you still lack the "action" from the user in this picture. Well, in Google's solution, every time you type (or nearly every time), a new Ajax call is made to send the update.
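The hanging-request cycle described above can be simulated in plain JavaScript without a real HTTP server; this is only an illustration of the long-polling pattern, with all names invented for the sketch.

```javascript
// Simulation of long polling: the server holds one pending request open
// until data arrives or a timeout elapses, then answers and closes it.
function createLongPollServer(timeoutMs) {
  let pending = null; // the single hanging request
  return {
    poll(onAnswer) {
      // Client issues a request; the server does not answer yet.
      pending = { onAnswer, expiresAt: Date.now() + timeoutMs };
    },
    publish(data) {
      // An update arrived: push it through the open connection.
      if (pending) {
        pending.onAnswer({ data });
        pending = null;
      }
    },
    tick(now) {
      // Timeout reached: answer with nothing and close; the real
      // client would immediately re-poll at this point.
      if (pending && now >= pending.expiresAt) {
        pending.onAnswer({ data: null });
        pending = null;
      }
    },
  };
}

const server = createLongPollServer(60000); // ~1 minute, as described
const received = [];
server.poll((answer) => received.push(answer.data));
server.publish('doc changed'); // pushed through the open connection
```

The cost Alain points out shows up here as the `pending` object: a real server holds one of these (plus a TCP socket) per connected client for the whole timeout window.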

As for the stream of hot copies, well, it's cool but not amazing. I suggest you check out things like Erlang, an odd language which allows the full code to be updated live without stopping it. Of course, they don't do that here; they basically have a "state" stored on the browser side, maybe in a cookie or whatever, and when there is a new version they simply load the new page and restore the state. Nothing all that fancy to me. Yet it does work.

Again, I wonder how useful all this is. For my own applications: not useful. If I change some software, the change will simply be valid on the next reload; not the end of the world, in my opinion.

BTW, I coded something VERY similar with wsirc (a couple of years ago): if I change the JS code on the server, the browser reloads the page but keeps all the state as it is (connections and history). So certainly not magic. But hey, this is the only application I ever made that might benefit from such a feature.

As for "latency compensation", to me this is pure marketing. Sorry, but if a user does some action, the action is simply made directly on the page, and then Ajax sends it back to the server as fast as possible. Nothing magic nor new. Again, I have the same in wsirc... and it's really not that hard.
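The "apply it directly on the page, then sync" behaviour described above is what is usually called an optimistic update, and it really is just bookkeeping. A minimal sketch, with all names invented for illustration:

```javascript
// Sketch of latency compensation as optimistic UI updates: show the
// change immediately, then confirm or roll it back when the server replies.
function createOptimisticList() {
  const confirmed = [];          // items the server has accepted
  const pending = new Map();     // items shown but not yet confirmed
  let nextId = 0;
  return {
    add(item) {
      // Display the item at once, before the server round-trip.
      const id = nextId++;
      pending.set(id, item);
      return id;
    },
    ack(id) {
      // Server accepted: promote the optimistic item.
      confirmed.push(pending.get(id));
      pending.delete(id);
    },
    reject(id) {
      // Server refused: roll the optimistic item back.
      pending.delete(id);
    },
    view() {
      // What the user sees: confirmed plus optimistic items.
      return confirmed.concat([...pending.values()]);
    },
  };
}
```

The user sees the result of `add` immediately; the only "compensation" is the rollback path when the server rejects the write.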


Instead of writing a lengthy post that contradicts yours in every possible way, I'll make a well-intended suggestion.

I beg you to read up on Node.js principles and this application/framework (it's both) that is built on them; you're misinterpreting the concepts discussed here.

 

Very interesting indeed.

I wonder how long it will be before it's ready and usable on a live production site.

It's ready to be used anywhere; it's just still in a very early release phase.

A lot of features are popping up with every new release, just like Node.

Edited by Spudinski
