MakeWebGames

creating function



Ok, yes, this is going to be a massive stress on the db, but I'll find a fix with my host, be it an increase in CPU or whatever.

I need to do 2 things.

I need to define a specific number on a page (sometimes during different "actions"), then update that number in the users table and insert a row into a log-style table.

How?

Link to comment
Share on other sites

I'm not fully sure I understand what you're asking; could you give a better example? What's the specific number?

As for this "massive stress on the db", I wouldn't worry. It sounds like you just want to update a database on every page load. The problem with this forum is that a lot of the developers don't actually understand what they're talking about. As a rule of thumb, I don't trust anyone that wraps all of their query arguments in backticks, because they clearly don't understand this simple concept (you people know who you are, you freaks). These people are also the ones that will tell you to "avoid querying the database wherever possible because MySQL is cancer and slow". Ignore it.

PHP bastardizing aside, what you're asking for really wouldn't warrant increasing your CPU with your host.

Link to comment
Share on other sites

What I've tried

global_func

function page()
{
   global $db,$ir;
   $page = $db->escape($page);
   $db->query("UPDATE users SET page={$page} WHERE userid={$ir['userid']}");
   $db->query("INSERT INTO `logs` VALUES(NULL,{$ir['userid']},{$page},time())");
}

 

In the pages:

$page = 1;
page(".{$page}.");
Link to comment
Share on other sites

For the record, it'd be

page($page);

And time() is a PHP function, not a MySQL one.

You'll need to change it to either ".time()." or simply unix_timestamp().
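Against the OP's logs insert, that means either of these (just a sketch, using the same variables as the function above):

// Option 1: let PHP supply the timestamp (concatenated into the SQL string)
$db->query("INSERT INTO `logs` VALUES(NULL, {$ir['userid']}, {$page}, " . time() . ")");

// Option 2: let MySQL supply it
$db->query("INSERT INTO `logs` VALUES(NULL, {$ir['userid']}, {$page}, UNIX_TIMESTAMP())");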

Oh, also, IllegalPigeon, lick my nipple!

Backticks are useful! Then again, so are standardized naming conventions (not using things like `count`, `time`, `user`, `badly,named,table,with,commas`, etc. as table/column names)

http://stackoverflow.com/a/11321508

Edited by Magictallguy
Link to comment
Share on other sites


Where to begin, where to begin? The *only* time backticks are useful is if you're using reserved keywords. Coding standards explicitly tell us to avoid reserved keywords, so why would you use them? Ergo, why would you use backticks? Coding standards also tell us not to use spaces in identifier names, so why put backticks on every identifier? Your Stack Overflow link backs up my point: all it does is explain the appropriate use of the backtick. And the post is 4 years old.

Backticks are not ANSI-SQL compliant, which causes an issue with support. I get it, you guys work exclusively with McCodes and this is the adopted method, plus you're likely never going to use another DBMS, but you're spiralling into bad practices.

Sure, if you're an idiot and you think "select" is an appropriate name for a database, use the backticks for it. Or you could be taken seriously as a developer and just follow good practice?
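To make that concrete, a throwaway example (the identifiers here are made up, nothing from the OP's schema):

// Backticks are only *needed* if you insist on a reserved word as an identifier
$db->query("SELECT `order`, `group` FROM some_table");

// With sensibly named identifiers there is nothing to quote in the first place
$db->query("SELECT sort_order, user_group FROM some_table");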

Link to comment
Share on other sites


In your page() function, you're escaping a number; that's not needed.

Link to comment
Share on other sites

He also hasn't defined $page as an argument in the function (and the globals are missing inside it).

function page($page = 0) {
    global $db, $ir;
    // Bail out unless we were given a positive whole number
    if(!$page || !ctype_digit((string)$page))
        return false;
    if($ir['page'] != $page) {
        $db->query('UPDATE `users` SET `page` = '.$page.' WHERE `userid` = '.$ir['userid']);
        $db->query('INSERT INTO `logs` VALUES (NULL, '.$ir['userid'].', '.$page.', '.time().')');
    }
    return true;
}

//Usage:
page($whatever_variable_holds_the_page_number);
Link to comment
Share on other sites

If you're worried about wasting database resources by logging every page load (assuming that's what you're trying to do; probably building an anti-cheat type of system, right?), I would suggest skipping the database entirely and sticking to text files. It's a lot faster than querying the database, especially when viewing the logs once there are enough of them. I used to log page loads via the database, switched, and won't be switching back. You can get a good 100k entries per day, which can be very taxing on the database.

I also would scrap the whole idea of a page() function. Update page in the same query where you update lastactive and last IP; there, saved you a query!
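Roughly like this, assuming your header already runs one update per request (the laston/lastip column names are guesses at your schema, so adjust to taste):

// In the per-request header update you already pay for: one query instead of two
$db->query(
    "UPDATE users
        SET laston = " . time() . ",
            lastip = '" . $db->escape($_SERVER['REMOTE_ADDR']) . "',
            page = {$page}
      WHERE userid = {$ir['userid']}"
);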

Advantage of text files: speed; way faster if you have a lot of entries.

Disadvantage: it's harder to query, but if you master regex you won't even notice.

Here's how to append to a file (this is what replaces the database logging):

[PATH] should be substituted with the path to your file.

 

  
$file = '[PATH]/actlog.txt'; // [PATH] = wherever you keep the log
$add = "$userid,{$_SERVER['REQUEST_URI']}," . time() . ";\n"; // one "userid,page,timestamp;" entry per line
file_put_contents($file, $add, FILE_APPEND | LOCK_EX); // append with a lock so concurrent requests don't interleave

 

Unfortunately, you cannot prepend to a file in PHP, which makes things harder later on, but not impossible.

Note to the people who want to argue that you can prepend to a file by loading the entire file into a variable, prepending to the variable, and then saving it: I am trying to use fewer resources, not load 100k lines into a variable!

An example entry would look like "1,/index.php,1467957322;" (1 = the user's ID, /index.php = the page loaded, 1467957322 = the time they loaded it).

Rough idea of the staff panel code:

By rough, I mean you'll have to add your own pagination or timestamp constraints if you want them, but it can show everyone (don't set $_GET['userid']) or narrow things down to one user (set $_GET['userid']).

 

function view_act_hour_logs() {
    print "<table style='width:100%;'><tr><th>Username</th><th>Page</th><th>Time</th><th>Last</th></tr>";

    $file = '[PATH]/actlog.txt';
    $current = file_get_contents($file);

    // Filter on one user's ID if given, otherwise match any ID
    $usid = isset($_GET['userid']) ? (int)$_GET['userid'] : "[0-9]*";

    preg_match_all("/($usid),([^,]*),([0-9]*);/", $current, $out, PREG_PATTERN_ORDER);

    // Reversing because we couldn't prepend earlier: show the newest entries first
    $out[1] = array_reverse($out[1]); // user IDs
    $out[2] = array_reverse($out[2]); // pages
    $out[3] = array_reverse($out[3]); // timestamps

    for ($i = 0; $i < count($out[1]); $i++) {
        // Time since the previous (newer) request; helps catch cheaters who run macros on a fixed interval
        $since = isset($last) ? $last - $out[3][$i] : 0;
        $last = $out[3][$i];

        print "<tr><td>" . usernameGen($out[1][$i]) . "</td><td>{$out[2][$i]}</td><td>" . date('F j Y g:i:s a', $out[3][$i]) . "</td><td>$since</td></tr>";
    }

    print "</table>";
}

 

You can further improve on these ideas, but I'm not going to give you all my work :P. For instance, on the hourly cron you can copy the file off to a backlog file and then clear the current one, to prevent it from getting too big. That requires a bunch of edits to the staff panel, though.
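The hourly rotation itself is only a few lines; a sketch, assuming a cron script that runs on the hour and the same [PATH] placeholder:

$file = '[PATH]/actlog.txt';
$backup = '[PATH]/actlog_' . date('Y-m-d_H') . '.txt'; // one backlog file per hour

if (is_file($file)) {
    copy($file, $backup);                  // keep the old entries for the backlogs
    file_put_contents($file, '', LOCK_EX); // empty the live log so it never grows too big
}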

But there's how you can do what you want without using any more database resources than you were already using ;). Confusing as hell? I bet.

Link to comment
Share on other sites


How shit does your database have to be for 100,000 queries to become taxing? I have a client site that, on certain pages, makes up to 100 queries per request, and the page still loads in less than 0.1 seconds.

Heck, I have another client that I built a CSV import script for that runs complex reports on the content of the CSV. When I say complex, I mean complex. Not "comparing data", but generating lots of different information for each entry. The CSV sizes can range from 25,000 to 500,000 rows. A lot of the data is discarded after the reports are generated, so their database sits at a steady 5,000,000 entries; the database isn't big, but the querying it does on each upload is.

Now, they do this several times a day. The part that slows the process down the most is reading and arranging the large CSVs before performing the queries. Sure, chunking helps, but it's still quite slow. As I write this, the client will be uploading his daily "small" list of about 50,000 entries to kick-start the day. It will have imported well before I've finished this post. Don't re-invent the wheel; there's no point in writing these logs to a file when it's more effective to store them in the database. You say "100,000" like it's actually a lot of entries.

Another site I've worked on got so much traffic that they were handling 40,000 requests per second. The database was very well structured, though, unlike McCodes, but that's not the point. A MySQL database can easily handle up to 300 million records and still perform absolutely fine, if it's built correctly.

Storing the data that the OP asked for in a database is going to be MUCH easier, especially since he is storing items against a user, which will make it easier in the future to pull the logs associated with a user (an example of that is below). Also, storing data in the database (like you're supposed to) is going to be 10x easier if you have multiple servers, for example (I know, uncommon with McCodes, but still a valid point). We could also get into multithreading and concurrent access, but I think I've stressed my point.
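By that I mean something like this (a sketch against the same logs layout used earlier in the thread, assuming an index on userid):

// logs(id, userid, page, time) with an index on userid
$userid = (int)$_GET['userid'];
$logs = $db->query("SELECT page, time FROM logs WHERE userid = {$userid} ORDER BY time DESC LIMIT 100");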

Link to comment
Share on other sites

Exploding would mean splitting on semicolons and then on commas, but that defeats the purpose of searching on a userid without regex.
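For reference, the explode route would look something like this sketch (same "userid,page,timestamp;" entry format as above):

$entries = explode(";", file_get_contents('[PATH]/actlog.txt'));
foreach ($entries as $entry) {
    $entry = trim($entry);
    if ($entry === '') {
        continue; // skip the empty piece after the final semicolon
    }
    list($uid, $page, $time) = explode(",", $entry, 3);
    // ...and only now can you check whether $uid is the user you're after
}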

And I don't know your limit on page execution time, but I prefer to keep mine all under 0.2 seconds. When looking through logs with hundreds of thousands of entries, it tends to be slow via the database.

using filesystem+regex => Total execution time in seconds: 0.045760869979858 (8)

I don't have it set up with the database to give an accurate timing, but it was well over 0.4 seconds. Some admins are okay with slow pages; I, however, prefer speed over everything.

Yes, McCodes is a poorly designed database; pretty sure the original coders have never even heard of an index :P

Link to comment
Share on other sites


Before we go into this, I want to stress that my "0.1 second" comment was about a site that handles 100 queries per request. Part of my job is to pen-test and speed up applications; I do it a lot. So your comment "I, however, prefer speed over everything" is meaningless to me, because it is literally my job.

Right, I wasn't going to reply to this because you clearly don't know what you're talking about. I felt, however, that I must, because you're spreading incorrect information. Therefore, I am posting code examples and screenshots so anyone can replicate them. I want to stress that inserting smaller logs into a text file would likely be MARGINALLY quicker than a database. MARGINALLY. But for this instance, I am using data sets of 100,000 and 10,000.

Both tests were done on the same server using the exact same Laravel install (it's really easy for me to spin up a Laravel instance).

Let's insert 100,000 rows into the database using SQL. (I use 10,000 rows for yours, btw, so you know it's fair.)

http://i.imgur.com/iJ4b0MN.png

Now, let's run a loop of 10,000 using your code.

http://i.imgur.com/pS6uPTT.png

Right, fine, I understand that's not a fair comparison. So, let's run 10,000 inserts into the database using a RAW database query from the Laravel facade.

http://i.imgur.com/lVBQ6Mn.png

Route::get('disproveanoob', function() {

   $start = microtime(true);

   for ($x = 0; $x <= 10000; $x++) {
       \DB::table('logs')->insert(
           ['user' => 1, 'url' => 'fgdgdfgdfg', 'time' => 123456789]);
   }
   $time_elapsed_secs = microtime(true) - $start;

   echo 'Time elapsed: ' . $time_elapsed_secs;        
});

Perfect, I win! Okay, now let's display the data, sound good? Right, let's fetch 5,000 results from my database, which consists of 110,001 entries!

http://i.imgur.com/rHWYcwO.png

Route::get('disproveanoob', function() {

   $start = microtime(true);

   $logs = \DB::table('logs')->select(['user', 'url', 'time'])->take(5000)->get();

   foreach($logs as $log) {
       echo 'User: ' . $log->user . '<br />';
       echo 'Url: ' . $log->url . '<br />';
       echo 'Time: ' . $log->time . '<br />';
   }
   $time_elapsed_secs = microtime(true) - $start;

   echo 'Time elapsed: ' . $time_elapsed_secs;        
});

Using your code, fetching 5,000 results from a 10,000-strong log file:

http://i.imgur.com/4wg9zMr.png

Right, so, I win again. It's worth noting that Laravel has created a well-structured table for me; the only index is the ID, set to auto-increment.

Now, you could argue that your results are formatted for use in the table. That's fine, but that's only going to add a minuscule amount to the execution time. My code will still win.

Plus, using the database, I have all of the added benefits I mentioned earlier. So, with all of that in mind, and EVEN IF your code performed 50% quicker than mine (we are talking milliseconds), would you still prefer to use the log-file method over the database, when the database has so many more benefits?

Edited by IllegalPigeon
Link to comment
Share on other sites
